A New Kind of Science: The NKS Forum > Pure NKS > The Shape of Elementary CA Landscape
Author
Shane Simonsen

Australia

Registered: Oct 2010
Posts: 5

The Shape of Elementary CA Landscape

I have done some work on the shape of the landscape for evolving elementary cellular automata.

My background is in protein biochemistry/NMR, so I come at this from the problem of folding a protein structure to meet simple constraints (e.g. atom 1 must be within a given distance of atom 2). This multidimensional problem has a potential energy surface that can suffer from local minima: rotating/moving parts of the protein to get closer to some requirements can lead to a state that cannot access any adjacent conformation of lower energy, often missing out on moving on to the overall lowest-energy form.

So there are a lot of parallels with evolving CAs. The simplest move is to change only one of the 8 component outcomes of a rule at a time, so each rule has 8 nearest neighbors.
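Since each elementary rule is just an 8-bit number in Wolfram numbering, this neighborhood structure is easy to sketch in code; here is a minimal Python illustration (the function name `rule_neighbors` is mine, not from the post):

```python
# Each elementary CA rule is an 8-bit number: bit i gives the output for
# the 3-cell neighborhood whose binary value is i. Flipping one output
# bit at a time yields the 8 nearest-neighbor rules.

def rule_neighbors(rule):
    """Return the 8 rules that differ from `rule` in exactly one output bit."""
    return [rule ^ (1 << bit) for bit in range(8)]

print(sorted(rule_neighbors(110)))
# [46, 78, 102, 106, 108, 111, 126, 238]
```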

If you can quantify the complexity (or at least the randomness) of the starting rule and its nearest neighbors, you can select the neighbor with the highest complexity/randomness, then repeat the evaluation on its nearest neighbors.
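A hedged sketch of the resulting hill climb. Shane's actual gradient measure is described in a separate post; as a stand-in, this uses zlib compressibility of the evolved pattern as a crude randomness proxy, so the `randomness` and `hill_climb` functions here are illustrative assumptions, not the original method:

```python
import zlib

def step(cells, rule):
    """One synchronous update of a cyclic elementary CA (Wolfram numbering)."""
    n = len(cells)
    return [(rule >> (cells[(i - 1) % n] * 4 + cells[i] * 2 + cells[(i + 1) % n])) & 1
            for i in range(n)]

def randomness(rule, width=101, steps=100):
    """Compressibility of the evolved pattern: higher = harder to compress."""
    cells = [0] * width
    cells[width // 2] = 1                      # single black cell start
    rows = [cells]
    for _ in range(steps):
        cells = step(cells, rule)
        rows.append(cells)
    data = bytes(c for row in rows for c in row)
    return len(zlib.compress(data)) / len(data)

def hill_climb(rule):
    """Greedily move to the most 'random' single-bit neighbor until stuck."""
    score = randomness(rule)
    while True:
        cand = max((rule ^ (1 << b) for b in range(8)), key=randomness)
        cand_score = randomness(cand)
        if cand_score <= score:
            return rule, score                 # local maximum reached
        rule, score = cand, cand_score

best, best_score = hill_climb(0)
print(best, round(best_score, 3))
```

Because the score strictly increases at every accepted move and the rule space is finite, the climb always terminates at a local maximum, exactly the trapping behavior described in the protein-folding analogy above.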

So, looking at the 256 elementary CAs, I set out to come up with an automated way to measure randomness (the gradient method, described in a separate post), then mapped the differences in gradients across all the nearest neighbors. Since my background isn't in computer science I ended up doing it all in OpenOffice Calc by hand (rather hypnotic fun with a good 20th-century music soundtrack).

Starting with a single black cell, the results indicate there are 12 rule classes (including equivalent rules) that represent local minima (i.e. they have no adjacent rules with greater complexity/randomness). Unfortunately I have since figured out that starting from a random initial condition with 50% black cells gives more reliable gradients, but I haven't found the motivation to redo the very lengthy processing. A quick analysis of the changed input conditions showed that a moderate percentage of rules evolve in different directions.

Generally the rules with greater complexity manage to attract a larger number of starting rules toward them during evolution (simple graph below the data). Starting with a random initial condition would presumably make this relationship clearer.

So does this gradient method represent a useful way to start at a random spot in a more complex rule space and search for something complex/random? (e.g. 4 inputs deciding the fate of each output cell, giving 2^(2^4) = 65536 rules).
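For what it's worth, the single-bit-flip neighborhood generalizes directly: a k-input rule has 2^k output bits, hence 2^k nearest neighbors inside a space of 2^(2^k) rules. A minimal sketch (the function name is mine):

```python
# A k-input CA rule has 2^k neighborhood configurations, so there are
# 2^(2^k) possible rules, and flipping one output bit gives 2^k neighbors.

def neighbors_k(rule, k):
    """Rules differing from `rule` in exactly one output bit (k-input CA)."""
    return [rule ^ (1 << bit) for bit in range(2 ** k)]

print(len(neighbors_k(0, 4)), 2 ** 2 ** 4)   # 16 65536
```

So while the rule space grows doubly exponentially with k, the number of neighbors to evaluate at each step grows only as 2^k, which is what makes a greedy walk through these larger spaces feasible at all.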

Am I right in anticipating that, as rule systems become more complex, the evolution landscape should become smoother and more connected (making it easier to find a highly complex/random end point, and sampling a broader effective space)?

The biggest issue I have with the approach is this: what is so great about finding ever more random CAs? Complexity, and some kind of mythical class 5 or above, is what we are really salivating over. The method for evaluating complexity probably needs some refining.

Would it be possible to have a target pattern (like a grainy picture of Abe Lincoln) to search for, assigning points based on how close the observed patterns come to it? Surely at some level of rule complexity it should be possible to generate any output?
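One way to make the target-pattern idea concrete is to score a rule by the fraction of cells where its evolved pattern agrees with a fixed target bitmap (a Hamming-distance fitness). A minimal sketch with toy grids; the function name and data here are illustrative assumptions, not part of the original post:

```python
# Score a CA pattern against a target bitmap: fraction of agreeing cells.
def pattern_match(rows, target):
    """Fraction of cells where the evolved pattern matches the target."""
    hits = sum(a == b
               for row, trow in zip(rows, target)
               for a, b in zip(row, trow))
    total = sum(len(trow) for trow in target)
    return hits / total

# Toy example: 5 of the 6 cells agree.
rows   = [[0, 1, 0], [1, 1, 1]]
target = [[0, 1, 0], [1, 0, 1]]
print(round(pattern_match(rows, target), 3))   # 0.833
```

This score could simply replace the randomness measure in the hill climb, turning the same greedy walk into a search toward the target.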

The other issue is this: exploring one rule set is fun enough, but can the method be extended to jumping from one rule set to another (evolve the rules themselves, and then evolve the rules for evolving the rules)? This feels very biological to me: fine tuning of limited parameters within one species, then occasional massive shifts in the genetic structure during speciation events.

Enough panting from me- let me know what you think (including if I am crazy, lazy or redundant).

PS- I would love some way of organising the actual visual patterns together to show how they evolve toward complexity- any ideas on how this could be done? It is beyond my computational skill but intuitively it feels like it might reveal something interesting.

Shane Simonsen has attached this image:


10-27-2010 03:16 AM
Ivan Antonowitz

Registered: Jul 2004
Posts: 5

My background is also in Organic Chemistry NMR. When simulating NMR spectra using chemical shifts and coupling constants, the patterns obtained were based upon the series:

n nuclei ::= 2^(2^n)

We had to double the power of the computer for every nucleus included, so the Chemists had to break their molecules into 'digestible' fragments.

As a technician I was spared this messy process, and instead had the major advantage that I could examine the 'same' data from both the perspective of the Free Induction Decay [FID] and the plotted Spectrum. Think of the FID as summarizing the NMR machine's attributes, whilst the Spectrum summarized the attributes of the Sample.

This Duality is fundamental to the theory of the BIT as per Shannon_De_Morgan.

As far as I can tell, Cellular Automata are completely isomorphic to the Formal Symbolic Logic of the Sentential Calculus. In other words Shane Simonsen is looking to 'reduce' the patterns of his spectra into the parameters of the Chemical shifts and Coupling constants of his nuclei.

There is a nasty headache when the magnetic field of the NMR is not strong enough to keep the Spectrum first order and we get broad Quantum Mixing. I found a neat trick, though. When simulating spectra it was rather tricky to get the sign values of the coupling constants correct. Simply rerun the spectrum at a lower magnetic field: the quantum-mixed pattern will not match if the Coupling Constants are not assigned the correct Signs.

My point of focus is on the Duality of the BIT which seems to be missing when discussing CA.


11-13-2010 06:45 AM
Shane Simonsen

Australia

Registered: Oct 2010
Posts: 5

Classifying CAs

I had a background in NMR as well so I get where you are coming from.

At the moment the approach isn't perfect, but it seems to be a useful enough tool: a single number lets you home in on rules with the level of complexity you are interested in. I am still very interested to hear if anyone else has tried the method, or has already come up with a similar approach. Can anyone else run through the method and see if it works in their hands?

Thanks

Shane


11-22-2010 08:04 AM
