Wolfram Science Group
Phoenix, AZ USA
Registered: Aug 2003
That sounds likely to me. I am wondering about places where the same sort of analysis might be extended, with a bit less ensemble thinking and a bit more of the search-of-a-possibility-space idea. Obviously they did quite a bit of that on the code-possibility side, which is great, and why I like the article so much as NKS-like. But what about doing it on the error-value side (i.e. "unpack" the measure)?
That is, instead of asking, for each code possibility, what its numerically averaged value is over 64 single-letter changes, take some more restricted set of possible codes and look at how much the protein structure is altered by each of those one-move changes, and by combined pairs. So the "wiring" of the code is semi-fixed (a small set: the natural one and a few others, say one more "optimized" and one seemingly random, etc.), and the 1-switch and maybe 2-switch "errors" are enumerated rather than averaged. Then you look at base-pair sequences as the big possibility space, and get lots of raw data rather than just an average for each code, starting from short amino acid chains and working up as high (in chain length) as it stays practical to look at a fair portion of the possibilities.
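To make the enumeration concrete, here is a minimal sketch of the idea, using a made-up stand-in for a code (a two-base alphabet with length-2 codons, so the whole sequence space stays tiny) rather than the real 64-codon table, and hydropathy-style numbers as the "how much did the protein change" proxy. Every name here (CODE, PROP, single_base_errors) is hypothetical; the point is just that each single-base change is recorded as its own data point instead of being averaged away:

```python
from itertools import product

# Toy stand-in for a genetic code: 2 bases, length-2 codons -> 4 codons.
BASES = "AU"
CODE = {"AA": "K", "AU": "I", "UA": "Y", "UU": "F"}   # hypothetical wiring
# Hydropathy-style property values for the four amino acids (toy proxy
# for "how much the protein is altered" by a substitution).
PROP = {"K": -3.9, "I": 4.5, "Y": -1.3, "F": 2.8}

def single_base_errors(seq, code=CODE, prop=PROP):
    """Enumerate every single-base change in a codon sequence; return
    (position, old_base, new_base, property_shift) for each one."""
    errors = []
    for i, old in enumerate(seq):
        for new in BASES:
            if new == old:
                continue
            mutated = seq[:i] + new + seq[i + 1:]
            c = (i // 2) * 2                     # start of the affected codon
            shift = prop[code[mutated[c:c + 2]]] - prop[code[seq[c:c + 2]]]
            errors.append((i, old, new, shift))
    return errors

# The big possibility space: every two-codon sequence, each yielding its
# own list of raw per-mutation shifts -- data, not a single average.
dataset = {seq: single_base_errors(seq)
           for seq in ("".join(p) for p in product(BASES, repeat=4))}
```

Scaling this to 4 bases and longer chains is just a matter of widening BASES and the repeat count, up to whatever chain length stays practical.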
See the idea? Instead of a big space of codes (what they did, useful, but they already did it) and averaging of possible errors, a big space of (all pretty short) proteins and a significant, but tractable, space of explicit possible errors.
The other thing the "pretty good but not optimal" result they found reminds me of is the stuff in the NKS book on approaching satisfaction of complicated discrete constraints by an iterative procedure, where one typically sees rapid improvement, then it slows, and eventually it crawls. The last bits do not "fall" to iterative improvement when the constraint set is complicated, rather than, e.g., some one-dimensional local minimum. Those experiments are typically set up to ask, in effect, whether some measure of fit improves or not with discrete flips.
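The shape of those NKS-style experiments can be sketched in a few lines: a random set of discrete constraints, a bit string, and single-flip moves that are kept only when the count of violated constraints does not go up. Everything here (variable counts, the random-3-SAT-style constraint form) is an illustrative assumption, not the book's actual setup; the qualitative behavior is the rapid-then-crawling curve:

```python
import random

random.seed(0)
N_VARS, N_CONSTRAINTS = 40, 200

# Random discrete constraints: each is three (variable, required_value)
# pairs, satisfied when at least one pair matches the current bits.
cons = [[(random.randrange(N_VARS), random.randrange(2)) for _ in range(3)]
        for _ in range(N_CONSTRAINTS)]

def unsatisfied(bits):
    """Count constraints the current assignment violates."""
    return sum(1 for c in cons if not any(bits[v] == val for v, val in c))

bits = [random.randrange(2) for _ in range(N_VARS)]
history = [unsatisfied(bits)]
for step in range(2000):
    v = random.randrange(N_VARS)
    bits[v] ^= 1                     # one discrete flip
    score = unsatisfied(bits)
    if score > history[-1]:
        bits[v] ^= 1                 # flip made things worse: undo it
        score = history[-1]
    history.append(score)
# history falls fast early on, then flattens out at whatever the
# single-flip procedure cannot improve locally.
```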
So, one could imagine trying to set up a toy version of a possible evolutionary path to a code. Does this one assignment change make the resulting code score higher or lower on an error-sensitivity measure (like the one they used) than the code before that one switch? The aim would be to get a handle on how easy it is to "walk" to good error scores when you take only one step at a time and only detect "improvement" locally.
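A toy version of that walk might look like the sketch below: a small made-up code (16 codons over 4 bases, each assigned a numerical "amino acid" property), an error-sensitivity score defined as the mean squared property change over all single-base codon changes (a stand-in for, not a reproduction of, the measure in the article), and single assignment swaps accepted only when they do not worsen the score. All names and parameters here are assumptions for illustration:

```python
import random
from itertools import product

random.seed(1)
BASES = "ACGU"
CODONS = ["".join(p) for p in product(BASES, repeat=2)]   # 16 toy codons
PROPS = [random.uniform(-5, 5) for _ in range(8)]         # toy amino acid values

def error_score(code):
    """Mean squared property change over all single-base codon changes --
    a toy stand-in for an error-sensitivity measure."""
    total = n = 0
    for codon, p in code.items():
        for i, b in enumerate(codon):
            for new in BASES:
                if new != b:
                    q = code[codon[:i] + new + codon[i + 1:]]
                    total += (p - q) ** 2
                    n += 1
    return total / n

code = {c: random.choice(PROPS) for c in CODONS}          # random starting code
trace = [error_score(code)]
for step in range(500):
    a, b = random.sample(CODONS, 2)
    code[a], code[b] = code[b], code[a]                   # one assignment swap
    s = error_score(code)
    if s > trace[-1]:
        code[a], code[b] = code[b], code[a]               # locally worse: undo
        s = trace[-1]
    trace.append(s)
# How far trace falls, and how often it stalls short of the best score
# found by exhaustive search, is the "how easy is the walk" question.
```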