A New Kind of Science: The NKS Forum > NKS Way of Thinking > Are elementary CA really simple?
Author
IanLuxmoore

NEW ZEALAND

Registered: Apr 2006
Posts: 10

Are elementary CA really simple?

Hi everyone

I have just begun my journey into the world of complexity and all that goes with it, and am looking to incorporate the key learnings of this exciting field into my PhD study of national energy systems, which I have just started. Anyway, I stumbled across NKS after discovering the magic of CA and have just finished reading the book.

Clearly the most fundamental point of the book is that simple systems can produce complex results. Having that statement reiterated every second page in the book forced me to examine it very closely, to make sure that I both understood what it meant and agreed with it.

I concur entirely that the results from these systems are complex and this was demonstrated wonderfully in the book.

However, what was missing in my view was justification of the first half of the message - are elementary CA actually sufficiently simple systems to justify this claim? After considerable thought I am unsure that this is actually the case, and after even more thought, even more uncertain what is meant by "simple rules" in the context of this fundamental statement.

In part their simplicity comes from the fact that they can be communicated simply (e.g. "Rule 30", or perhaps by the pictures), and those familiar with the theory would know exactly what was meant. But this simplicity of communication does not demonstrate simplicity in itself. I could talk to you of "New York" and you would also know what I meant. I don't see anyone claiming New York is a simple system ;)

The main reason for my uncertainty as to their simplicity is the *hidden* rules of an elementary CA. For starters there are the dimensions of the space it exists in. Since these are in fact a critical part of the system and not a naturally occurring state, they must in fact be part of the rules. Among other things these hidden rules MUST state that:

1) The world is essentially 3 dimensional - these being:

a) Time (or "down")
b) Space (or "across")
c) Colour

(yes, I know this deviates slightly from traditional meanings of dimension, but the word serves its purpose).

2) Each of these dimensions is absolutely discrete (i.e. there is no half a second of time if each time step represents a second, for example)

3) All elements are defined by squares

4) Nothing can change after it has occurred.

5) A single discrete element has no recognition of anything more than 1 cell "away"

6) The system has a boundary (or it doesn't - but in some sense it must, even if it's defined by memory or something)

7) Time is spatial.

And I am sure there are many more. In fact my realisation that these systems are potentially not simple came from trying to implement basic CA in code. You have to do substantially more coding than just typing in a 30 to get Rule 30 behaviour. You have to make all the above rules and others work as well, otherwise you get some bizarre behaviour. It is these rules that differentiate an elementary CA from a mobile automaton or other type, and therefore in reality they form a critical part of the rules of the so-called "elementary" cellular automata.
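For what it's worth, the core update itself stays small even in a general-purpose language. Here is a minimal sketch in Python (names are my own, and the fixed-to-zero boundary is one concrete choice for the "boundary" hidden rule in the list above):

```python
def eca_step(cells, rule):
    """One step of an elementary CA.

    cells is a list of 0/1 values; rule is the Wolfram rule number (0-255).
    Cells off either end are treated as permanently 0 - one concrete
    choice for the "boundary" hidden rule discussed above.
    """
    padded = [0] + cells + [0]
    return [(rule >> (4 * padded[i - 1] + 2 * padded[i] + padded[i + 1])) & 1
            for i in range(1, len(padded) - 1)]

# Rule 30 grown from a single black cell
row = [0] * 7 + [1] + [0] * 7
for _ in range(5):
    row = eca_step(row, 30)
```

Of course, the point stands that the discreteness, the grid, the boundary, and the update order are all decisions baked into this code rather than into the number 30.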

The other difficulty comes in the way we visualise things. Take for example Conway's Game of Life and a Rule 30 automaton. They are represented in quite similar fashion - black or white squares on a grid. But they are very, very different systems. In fact, in terms of the way they work, they are almost entirely different in so many ways. But the results look kind of similar in a sense. The Game of Life is significantly more complex than an elementary CA - but is either simple?

The differences in these "hidden" rules are as substantial as, and arguably much more important than, the explicit rule itself (e.g. rule 30 or Life).

I am curious how other people would answer the question in my topic:

Are elementary CA really simple?

I look forward to your thoughts!

Cheers
Ian

Report this post to a moderator | IP: Logged

04-22-2006 02:08 AM
Jason Cawley
Wolfram Science Group
Phoenix, AZ USA

Registered: Aug 2003
Posts: 712

It is a reasonable thought, but it only gets you to the end of chapter 2.

By that I mean, starting in chapter 3 of the book, Wolfram considers the idea that special features of CAs might be necessary to produce the great complexity seen in the behavior of rules like 30 or 110. He approaches the question by systematically stripping away every individual special feature of the elementary CAs, and considering other rule systems that lack that specific feature. And then he pushes the enumeration of systems of that new type, until he finds complex behavior again.

He does not have to go far. He never fails to find it. Ergo, the specific features dropped were not necessary to produce the complex behaviors seen.

Mobile automata keep discreteness, time steps, cells, etc. But they lack the intense parallel updating of CAs. They have fundamentally less going on, with only a single active site. You have to go a little farther to find non-trivial behavior when only a single site is updating. But you find it.

Turing machines don't have neighborhoods. They consider only the site value right where the active site is. If interaction of information from neighboring sites were essential to the phenomena seen, then TMs would lack it. But instead, very simple TMs already show complex behavior - not quite at the 2-state, 2-color level, but not far beyond it. The book later gives the simplest TMs known to be universal, based on their ability to emulate rule 110.

Substitution systems don't retain a fixed array of sites. They more readily yield nesting, and you might think that is all they could give. But push a little farther and you find substitution systems that give more complicated behavior.

Tag systems are in a sense a simplification of substitution systems. Rules that remove only 2 elements per step are enough to give complex behavior. He pushes the simplification further with cyclic tag systems, which limit the new block types that can be added.

Register machines don't have multiple discrete elements interacting in a cell-like way or a string-like way. Each register is just a number. With just two numerical registers, he had to push to 8 allowed instructions to find complex behavior, but did find it.

Next he shows symbolic systems - and in the notes, historical idealizations of logic called combinators - that give arbitrarily complex behavior. S-K combinators are a known universal system, one of the simplest possible.

The stripping of characteristics of CAs does not stop at the end of chapter 3. In chapter 4 he finds the same phenomena in numerical systems, number theory, digit representations, and the like. He shows complexity in iterated maps, as already explored by chaos theory, and relates that better known case to the other cases seen earlier in the book. By the end of the chapter he is dropping discrete values, first showing complexity in continuously valued ("grey level color") cellular automata, and then showing it in differential equations, which lack the notion of a time step.

So what is left of the distinct features of CAs by the end of chapter 4, that might be a "smuggled" form of complexity? Not much. He explores whether more dimensions add anything fundamentally new and finds they do not. He explores whether explicitly multi-valued transitions - rather than one-track determinism - change something essential and finds they do not. He explores explicit injection of randomness, and finds intrinsically generated randomness, as already seen in the ECAs, can give similar behaviors.

Before he turns to real world systems, he has, in the purely formal first half of the book, demolished the intuition that "only CAs" do this, or that there is any single magical feature they possess that lets them produce complex behaviors. Some features of the typical behaviors of simple programs are easier to notice or to see rapidly in CAs, and it may be somewhat easier to tell other things aren't causing it.

But the result itself is remarkably robust and indifferent to the specific details of the rule system chosen. Always, you see some very simple cases that can't do much, and then pushed just a tiny bit farther in terms of number of allowed states or other internal resources, they start doing fundamentally complicated things.

Later, in chapter 11, he shows why that should not be so unexpected - it is entirely plausible that instances of these very simple systems are already over the threshold of computational universality. Meaning they could in principle be used as "instruction sets" for a "CPU", a "compiler" written to translate any desired program into that rule system's terms, and then be able to implement any computable algorithm. In chapter 12 he makes a heroic generalization from that inductive series, and conjectures that almost every system we see that isn't doing something very simple is already as computationally sophisticated as anything in our universe.

Meaning, it sure as heck isn't just about CAs.

I hope this helps.


04-22-2006 03:07 AM
IanLuxmoore

NEW ZEALAND

Registered: Apr 2006
Posts: 10

In response and clarification, the main direction of my thinking was not so much that the conclusion of the book is wrong or that CA are special cases, but that the systems he discusses as having simple rules are not quite so simple as he implies. I guess I also made the mistake of overfocusing on CAs.

I agree entirely that the complexity of the results may not necessarily come FROM a defined complexity within the rules, and that discovery in itself is fascinating and profound. However the complexity MUST come from the rules in some sense, and this includes the hidden rules. Most likely it is the way they interact that provides the complexity, rather than any rule directly, but whichever way you look at it, that complete set of rules does generate that complexity.

But my thinking is more that the system itself is not literally as simple as is implied in NKS. The complexity required in writing code to represent such a system demonstrates this (granted it's not rocket science to do - but it's not "simple" either).

It is true that other types of systems (and arguably all systems) also demonstrate this complex behaviour like those in CAs at some threshold of rule complexity from relatively simple rules and that is also profound.

However, I don't think those systems are entirely simple either. Each system has an important list of hidden rules LIKE that for elementary CA, even if the particular rules are different. Although a mobile automaton, for example, has no parallel updating, is it actually any simpler than a CA? It has a different set of hidden rules that define how it works, but I don't see how they are substantively less complex.

In order to code a program that can emulate any of the systems in NKS you have to write relatively complex code. So while that may mean that CA are not a special case (which is very well demonstrated in NKS), it does not demonstrate that CAs (or any other automata) are inherently simple systems or that the rules are completely simple - nor does it demonstrate that any of the other systems such as mobile automata or Turing machines are inherently simple either. So while the particular rules may not matter, the nature of the rules is still in some way not-simple (maybe the use of the word complex is confusing?)

In fact I'm struggling to conceptualise what a simple system actually would be or whether such a thing actually exists?

Perhaps I am bouncing dangerously close to chaos theory here lol.

Anyway I hope this makes some sense

Thanks again for your time and thoughts!

Ian


04-22-2006 04:21 AM
Jason Cawley
Wolfram Science Group
Phoenix, AZ USA

Registered: Aug 2003
Posts: 712

One quick reply, one slow one.

There are plenty of rules or system types that are clearly simple, not only at the level of their rules or the set up, but also in their ultimate behavior. You won't ever find any complexity in the function that returns 0 for all inputs. Or in its CA analog, rule 0. In no reasonable sense is the successor function, which returns its input plus 1, complicated - even though it has an infinite number of possible outputs, and what it returns is different for each of an infinite number of inputs. So clearly, fundamentally simple rules exist. Going a bit farther, no 2-state, 2-color Turing machine is ever going to do anything interesting.

And universal rules are clearly not like that. It is not a matter of terms or usage, but a real formal difference.

When we say that simple programs produce complicated results, it does not mean that we secretly smuggled in the complicated result part and "really" they are complicated rules. I understand there is a traditional intuition from engineering, that we only get complexity out if we deliberately put it in. But this intuition is simply wrong. Fundamentally, hopelessly, empirically wrong. Intuition is a wonderful thing and often a useful guide, but sometimes it is wrong. And intuition dies hard.

If you write a C program a million lines long, you aren't surprised if it does complicated things. It has enormous internal resources: ways to vary what it does in special case after special case, elaborate chains of things to do, branches to go down or avoid, all connected back to each other in a spaghetti-like network. If you look at the circuit design for a CPU, you will see a hardware version of similar complexity.

But nothing remotely like that kind of complex, specialized, designed set up is necessary to produce strictly arbitrarily complex behavior. No, it is not being smuggled in from the overall system set up, say the definition or layout of a CA - the set up is nearly irrelevant and can be varied as you please, and the same phenomenon will crop up again.

The relationship between intricacy in, at the system definition stage, and complexity out, at the behavior stage, is not linear. It is a step function, a threshold effect. Clearly the million line C program and the CPU are over on the "complex as you please" side. But they have company there. And you get there a lot faster than people supposed in the past.

On the constant function or rule 0 side, you can find rules so simple they can never do anything complicated. Slightly more internal resources, and you can get a couple more varieties of behavior, but really only a couple more. Those are then seen over and over in simple system after simple system. Constants, shifts, periodic behaviors, nesting. In set up after set up, those and no more. When on the near side of the system resources divide.

And then you pass a threshold, and you instead see great internal complexity. You don't have to have more colors than rule 0 has. You don't have to have a different set up. The specific difference between, say, rule 250 (which makes a growing checkerboard pattern) and rule 30 or 110 is not smuggled into different details of the set up. They do not have different status under the usual suspects - designed or not, huge specifications or short ones, deterministic or not, purely formal or empirical, etc. All those attempted "cuts" put them in the same bin. But one lot do fundamentally simple things, and the other do not.

Consider a case like the 3n+1 problem - where it may be that the eventual behavior is always "simple" in a specific sense, but where it at least so far seems arbitrarily hard to prove that is always the case. Given any number: if odd, multiply by 3 and add 1; if even, divide by 2. Repeat. It is easy to see that 1 goes to 4, then 2, then 1, and cycles. It is not known whether all numbers land on that same cycle or not. 3 does, by way of 5, which does by way of 16 and 8. If an odd number does, then so does the even that is its double. 7 does, by way of 11 and 17 and 13. Etc.

But it is some computational work to verify, for each number, that it eventually lands on the 4-2-1 "attractor". Is this complexity somehow smuggled into the set up? Does it exploit lots of cells, or parallelism? It is already there in fundamentally simple questions in number theory.
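That computational work is easy to state in code. A sketch in Python (function name mine), just counting the steps each number takes to first reach 1:

```python
def collatz_steps(n):
    """Number of 3n+1 steps until n first reaches 1.

    If odd, multiply by 3 and add 1; if even, halve. Repeat.
    That this loop terminates for every n is exactly the open conjecture.
    """
    steps = 0
    while n != 1:
        n = 3 * n + 1 if n % 2 else n // 2
        steps += 1
    return steps
```

The step counts jump around irregularly - 2 takes 1 step, 3 takes 7, 7 takes 16, and 27 famously takes 111 - with no obvious pattern, which is the "computational work" in question.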

We can notice two critical components of the 3n+1 example that may indeed be needed. It has conditionality - it does something different for evens and for odds. And it has the notion of doing something over and over again, that one word "repeat". That appears to be about all one can say is really required. And it is precious little.

But 3n+1 may be a borderline case, because somebody might prove a theorem that all numbers end on the same cycle. Would that make the 3n+1 example a fundamentally simple behavior? It isn't fundamentally simple to prove that theorem, at any rate, or somebody would have done it by now.

Now, we can sort rules by the behavior they show after we calculate what that behavior will be. Then we see some on the simplicity side of the divide, and some on the complex side. Or we can sort rules by the variety of elements or allowed interrelations we let them have, internally. Then we find if those are very, very low, we get only simple behaviors, but if they rise, at all appreciably, past constant functions or a couple of allowed internal states, and have anything that can serve as conditionality and any way of doing things over and over, then some of them will have arbitrarily complicated behavior. Without any more resources or any different set up, from other instances just like them that always give simple behavior. Just different details.

Below the very low threshold, details do not matter. The behavior is simple. Above it, the details matter. Some behave simply, many behave with arbitrary complexity. Not more and more complex as internal resources are increased. Right now, bam, all you will ever get. Once the details matter, you can't tell from the set up whether the behavior will be complicated. You have to actually perform the computational work, and see.

That is why a fixed instruction set can let a CPU implement any million line C program you care to write, and get exactly the behavior you intended - if you write it correctly, at least! (The same formal dependence on details will make it hard to predict the effect of any bug you happen to introduce). And that instruction set could be vastly simpler and the same would still be true. "Compilers" would link any of the simple universal systems discovered in NKS or before it by others, and typical real instruction sets from engineering. (Which are no doubt easier to work with, but not capable of anything fundamentally more involved, on the behavior side).

Programmable complexity is easy to get. It does not need to be smuggled in. Nobody designed rule 110 to be universal, it was simply found by search, in one of the smallest spaces of simple programs. Programmable complexity - and its knowledge limiting consequence, formal undecidability - appear where nobody put them, without anyone designing or intending it. We discover it like a law of mathematics, we do not have to make it. And it crops up about three steps into the world of simple programs, just beyond the door.

I hope this helps.


04-24-2006 09:46 PM
Peter Forsberg

Registered: Jun 2004
Posts: 6

A question

I am inclined to agree that rule-110 should not be called a simple set of rules. There are a number of unspoken software "rules" that are necessary for the "rules" of rule-110 to be implemented. Rule-110 might not require many lines of Mathematica program code. But when translated into machine code on a computer platform of choice, rule-110 will contain massively more lines of code. This can hardly be called a simple program. It is simple in a high level language specially adapted to CAs, but not on the machine level. And that is the level that matters.

Added to the software level we have the hardware "rules". The machine code version of rule-110 runs on a computer. A computer is a complex machine that consists, at the core level, of transistors connected in a complex fashion. We can call this transistor layout the hardware "rules" of rule-110. When you think of it like this you realise that rule-110 is actually a complex set of rules.

Is there a way to implement rule-110 other than on a computer?

Yes there is. You can do it by hand on a piece of paper, for example. But this is arguably more complex than running it on a computer. Now you are running rule-110 on the human brain, which contains 100 billion neurons connected in such a complex pattern that it has not been reverse engineered to this day. In the NKS book Wolfram points to seashells that have cellular automata patterns. If it indeed is a case of cellular automata in action, these automata are executed on a collection of organic cells. Also a complex machinery. The organic cell is complex in itself, and we are talking about a collection of organic cells.

Is it possible to build a really simple machine, with just a few moving parts, that can execute rule-110? This question has been bothering me since I read the book. Otherwise I don't think it is fair to call rule-110 simple.

Wolfram makes a big issue of the fact that rule-110 is universal. But he is already running rule-110 on a "universal" computer. Not so surprising that some simple programs do not act like a straitjacket but instead unleash the universal computer that already exists underneath.


05-05-2006 02:01 PM
Jason Cawley
Wolfram Science Group
Phoenix, AZ USA

Registered: Aug 2003
Posts: 712

If you translate any program for one universal machine into another, you will change its length, and in general may increase it significantly. But there is nothing magically more simple about the instruction sets used by practical computers in this regard. Most of them are much more complicated than rule 110.

You could easily make a circuit board in which locations are wired together by and/or gates in conformity with the rules of 110 - or let them be programmable, to take any CA rule. This was actually done in the 1980s by some CA enthusiasts. So the instruction set - the machine level, the hardware - was CA rules, not those of assembly language, for example. You could, if you liked, encode other computations into any universal CA, and then run it on such a card. Then CA operations would be the simplest, while e.g. single lines of C code might require long representations, etc. And those would then count as "high level".

There is no magical distinction between hardware and software levels. Universality is of practical importance precisely because it tells us we can work in software with a fixed machine, without it preventing us from doing any computation we want. Optimized hardware for particular problems can speed up those problems. But in practice it is much easier just to write optimizing compilers and speed up a fixed layout than to rejigger layouts.

No, CAs are not more complicated than machine level assembly. They are less so. No, the simplicity of their implementation in Mathematica is not due to massive libraries under the hood or anything remotely like it. Mathematica lets you manipulate arbitrary symbolic systems easily, so it is good at that sort of thing. But the formal layout itself is not at all complicated.

You are doing a logical combination of bits from 3 neighboring locations, according to a rule-specific table. Which is no more complicated than adding 4 one digit numbers - less, actually. You do it many times, and the results of some of the prior steps influence later ones - that's all.
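To make that concrete, rule 110's update can be written as a single boolean expression over the three neighboring bits (a standard identity; the particular form and name here are my own choice):

```python
def rule110(left, center, right):
    """Rule 110 update as plain boolean logic on three bits.

    Equivalent to looking up bit (4*left + 2*center + right)
    of the number 110.
    """
    return (center ^ right) | (center & (left ^ 1))
```

Each output bit is one XOR, one AND-with-NOT, and one OR - in hardware, a handful of gates per cell.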

Brian Silverman and Danny Hillis built a computer out of Tinkertoys once, to illustrate to people who imagine that something magical happens in solid state electronics that no, it is perfectly simple what each step is, and the computational abilities we see as a result are purely a matter of doing lots of those simple steps, each related to the previous ones in only the most trivial ways.

What NKS adds is the awareness that universality is so easy to get, we can easily see how systems that weren't designed to be capable of it, are still capable of it. In other words, it helps us see the point that computation is a natural thing we have discovered, not something we invented that never existed before. Nature has been computing right along.


05-05-2006 06:49 PM
Mark Farrugia

Registered: Apr 2006
Posts: 6

Peter,

First, note that by your reasoning, nothing that we do or observe as humans can be considered "simple" because all of our actions and observations are occurring on a highly complex machine, the brain, and therefore it must "impose" a certain degree of complexity on our actions and observations. That doesn't sound right.

Second, "simple programs" like CAs, or any other type of properly defined program, cannot unleash the underlying capabilities of the machine it is being run on, unless the program itself is already capable of producing these capabilities. This is, of course, because the machine is doing nothing more and nothing less than exactly what the program is telling it to do, even though perhaps the machine can "do" much more.

In other words, Rule 110 is "intrinsically" universal. Its universality does not come from the "outside". This implies that any "system" that has *at least* the minimum capability to carry out the simple rule specification of Rule 110, like your PC, will itself be universal. But it takes nothing as complex as a PC to emulate Rule 110.

Note that the fact that we use "universal computers" to do our everyday computations - like, say, emulating cellular automata - is a matter of convenience. Otherwise, we would have to go to our local "Cellular Automata Depot" to buy each hard-wired CA rule we wished to experiment with. And yes, there would be a Rule 110 aisle, full of machines that "compute" Rule 110, and with no moving parts.

(Aside: If you were a "storekeeper" for such a store, then for the 2-color, nearest-neighbour CA section (k=2, r=1), you would have to maintain stock for 256 distinct machines. For 3 colors, about 7.6 trillion machines. That's a lot of real estate. Note that "universality" is why you buy software more than machine hardware, and what makes computing "spatially feasible".)
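The stock count follows from the rule schema: a k-color, range-r rule assigns one of k colors to each of the k^(2r+1) possible neighborhoods. A quick check (function name mine):

```python
def num_rules(k, r):
    """Number of distinct k-color, range-r cellular automaton rules."""
    neighborhoods = k ** (2 * r + 1)   # possible input patterns
    return k ** neighborhoods          # one output color per pattern

num_rules(2, 1)   # 256 elementary rules
num_rules(3, 1)   # 7625597484987, about 7.6 trillion
```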

Now, if this is where you are thinking to yourself that these hardwired Rule 110 machines cannot possibly be simple, due to their electronic components such as logic gates and transistors which are themselves intricately wired together, then convince yourself that using "electricity" to do our computations is another convenience, and what better way to "control" electricity than to use known physical devices that, yes, control electricity. The electricity simply encodes binary values, the wires propagate it, and the gates simply encode the rules. All just a very handy physical phenomenon for computing. What is essential is that we have some system with two distinct values and some relations defined between them.

Finally, that Rule 110 is theoretically capable of universal computation is a non-trivial discovery that has many implications.

Consider the pictorial description of Rule 110. It is simple. So too is its natural language description. Compare it with say the "blueprints" of modern computers, massively parallel supercomputers, or even the formal description of Turing machines. Which is the simplest? Rule 110 no doubt, and by a huge factor in many cases.

And although the highly-engineered computers can compute things much more easily in a much faster time frame and much smaller space than meek ol' Rule 110 can (thankfully), theoretically speaking they are all equivalent in that they are capable of computing the exact same set of functions.

So now within our universe, when you see some complex physical process churning away, performing its arbitrary abstract computation dutifully and unerringly, do not be surprised if you cannot predict its behavior by reducing it to say, a set of compact mathematical equations, or the like. Proof by Rule 110.

Last edited by Mark Farrugia on 05-05-2006 at 10:19 PM


05-05-2006 07:17 PM
Peter Forsberg

Registered: Jun 2004
Posts: 6

Interesting

I did not realise that it would be fairly easy to construct a CA from a few logic gates. I understand that now after both of your replies. I don't remember the book mentioning this fact, but it has been a few years since I read it.

Another question. Since building a specific CA can be quite simple one would think that they would appear in nature on their own. Have the mechanisms for a specific CA ever been found in nature? Or is it still a hypothesis that CAs are abundant in nature?

regards

Peter


05-08-2006 06:37 AM
Jason Cawley
Wolfram Science Group
Phoenix, AZ USA

Registered: Aug 2003
Posts: 712

It is an hypothesis that CAs are abundant in nature. But yes, instances have been found.

First there is evidence, but not explicit confirmation, from things like pigmentation patterns, that numerous cases of biological coloring effectively follow CA rules. Mottled camouflage and zebra stripes appear in 2D CAs, pigment on certain sea shells shows the same variety seen in 1D CAs, etc. Alan Turing was already working on such things back in the 1950s.

A more direct and recent example, where the mechanism is known with much greater specificity and which might count more as a confirmation than the evidence-matching examples above, is the regulation of pores for respiration (stomata) on leaf surfaces. There was a paper in Nature about it in 2003. Basically, local CA-like rules regulate the propagation of stomata openings and closings across the leaf surface.

CA based models of crystal growth are another instance of CAs in nature. For the simplest ones, you don't really need the whole CA set up (uniform growth on a facet face is trivially "like a CA", but also has a simpler model). For more elaborate cases like snowflakes, you do need something CA-like, whether you get it in simple discrete fashion, or incorporate continuous variables at site locations ("lattice Boltzmann" like methods).

Under the right assumptions or idealized limits, a direct formal correspondence between CA models of fluids and the traditional Navier-Stokes equations has been proven. The CA model passes, in the limit of infinitesimal cell size and unbounded particle number, to the differential equation results.

More generally, NKS tells us to expect the same kind of complexity we see in simple programs - a more general class than CAs - in all sorts of natural systems. We should therefore not expect such complexity to have specific and limited causes restricted to small regions of nature e.g. only biological systems optimized by millions of years of natural selection, or only engineered systems designed to have complex properties.

Instead we should see complexity in systems without anything special in the way of components - in fracture or crystal growth or turbulent flow e.g. - for reasons fundamentally the same as in living or artificial systems. The threshold of programmable complexity should be low and its incidence widespread. That is the hypothesis.


05-08-2006 05:43 PM
Jesse Nochella
WRI

Registered: Mar 2004
Posts: 132

I heard that microtubules are essentially cellular automata from watching an interview with Stuart Hameroff. Here is a page that has some interesting pictures detailing this claim. I have not read it yet.


05-08-2006 10:42 PM
IanLuxmoore

NEW ZEALAND

Registered: Apr 2006
Posts: 10

The biggest problem I have with the apparent simplicity of elementary cellular automata is that, while the rules may appear simple, the mechanism for calculating and reproducing the results is also part of the rules - otherwise the rules themselves are meaningless.

A question: what is the simplest possible mechanism for implementing an elementary CA? I don't even know where to start answering this question - anyone have any ideas?

As an aside, can someone clarify some (perhaps pedantic) terminology for me? Is rule 110 complex, or are the results of rule 110 when implemented and viewed over time complex? Because the way I see it, Rule 0 is EXACTLY as complex as rule 110, but the results viewed over time are much simpler.

Report this post to a moderator | IP: Logged

05-10-2006 10:48 AM
Lawrence J. Thaden

Registered: Jan 2004
Posts: 350

The simplest mechanism I have found so far is to let a cell’s digits index the digits of a randomly selected adjacent neighbor. The result is the cell’s new value.

With this arrangement there is no "rule" apart from the initial conditions. Just the mechanism itself.
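One possible reading of this digit-indexing mechanism, sketched in Python. The post doesn't pin down the details, so everything concrete here is an assumption: cells are taken to hold 3-bit values, the neighbor is chosen at random from the immediate left or right, and each of a cell's binary digits is used as a position into the chosen neighbor's digit list.

```python
import random

def step(cells, width=3):
    """An interpretive sketch of the digit-indexing mechanism (cell width,
    neighbor choice, and indexing convention are all assumptions):
    each of a cell's binary digits selects a digit position in a randomly
    chosen adjacent neighbor; the selected digits form the cell's new value."""
    n = len(cells)
    new = []
    for i in range(n):
        neighbor = cells[(i + random.choice([-1, 1])) % n]   # random left/right neighbor
        cell_digits = [(cells[i] >> k) & 1 for k in range(width)]
        nbr_digits = [(neighbor >> k) & 1 for k in range(width)]
        new_digits = [nbr_digits[d] for d in cell_digits]    # digits index digits
        new.append(sum(b << k for k, b in enumerate(new_digits)))
    return new
```

As described, there is no rule table anywhere in this - the update is entirely driven by the values themselves, which matches the claim that nothing beyond the initial conditions and the mechanism is needed.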


Report this post to a moderator | IP: Logged

05-10-2006 01:40 PM
Mark Farrugia

Registered: Apr 2006
Posts: 6

Like all things, "complexity" is a relative term. It means nothing in and of itself.

In terms of rule setup, Rule 0 is "just as complex" as Rule 110: they have the same schema. As for their running behaviour over time, Rule 110 is obviously "more complex" than Rule 0. Measure this by their predictability: what bit string is produced at time step t? Easy/reducible for Rule 0, hard/irreducible for Rule 110. (This is a good thing: it is important to have both types of behaviour.)
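The predictability contrast is easy to see directly. Here is a short Python sketch (mine, not from the thread) that evolves both rules from a single black cell: Rule 0 goes blank after one step, while Rule 110 keeps producing a nontrivial pattern indefinitely.

```python
def eca_step(row, rule):
    """One step of an elementary CA with the given rule number (0-255).
    Cells beyond the ends are treated as 0, so the row grows by one
    cell on each side per step."""
    padded = [0, 0] + row + [0, 0]
    # neighborhood (p, q, r) selects bit p*4 + q*2 + r of the rule number
    return [(rule >> (padded[i - 1] * 4 + padded[i] * 2 + padded[i + 1])) & 1
            for i in range(1, len(padded) - 1)]

def evolve(rule, steps, start=(1,)):
    """Evolve from a single black cell, returning all rows."""
    rows = [list(start)]
    for _ in range(steps):
        rows.append(eca_step(rows[-1], rule))
    return rows

# Rule 0 immediately erases everything; Rule 110 never settles down
print(sum(evolve(0, 20)[-1]))    # 0 black cells left
print(sum(evolve(110, 20)[-1]))  # still plenty of black cells
```

For Rule 0 the bit string at step t is trivially predictable (all zeros); for Rule 110 there is no known shortcut past running the evolution, which is exactly the reducible/irreducible distinction above.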

In the same manner, in terms of schema a PC is "more complex" than Rule 110, but in terms of behaviour over (infinite) time they are "equally complex": both can compute the exact same set of functions.

Report this post to a moderator | IP: Logged

05-10-2006 05:34 PM
IanLuxmoore

NEW ZEALAND

Registered: Apr 2006
Posts: 10

Originally posted by Lawrence J. Thaden
The simplest mechanism I have found so far is to let a cell’s digits index the digits of a randomly selected adjacent neighbor. The result is the cell’s new value.

With this arrangement there is no "rule" apart from the initial conditions. Just the mechanism itself.

Sounds interesting, how would you physically implement this though?

What I'm playing around with is trying to find the simplest possible mechanism for implementing an elementary CA such as rule 110. (I assume any mechanism capable of producing Rule 110 is capable of any of the 256 rules).

From a simple point of view, the ultimate simplicity of a set of rules (not results) is defined by the simplest possible implementation of those rules... and I can't think of any particularly simple implementation methods for an elementary CA, but as I said before I don't really know where to look.

Report this post to a moderator | IP: Logged

05-12-2006 03:01 AM
Jason Cawley
Wolfram Science Group
Phoenix, AZ USA

Registered: Aug 2003
Posts: 712

You can implement any single CA rule in hardware as an electronic circuit on a breadboard - you just construct logic gates from top to bottom through the pattern, 3 above feeding to 1 below etc. If you want it readily readable by a human, locations that get current can be used to light up lights or whatever. If you want to change to any CA rule easily, you might instead use a field programmable gate array (FPGA). The hardware is really quite irrelevant to the complexity question.

Ways of expressing rule 110 in particular, besides using IntegerDigits[110, 2, 8] to fill out the general 8-case ECA rule table, include Xor[Or[q, r], And[p, q, r]], or Mod[(1 + p) q r + q + r, 2]. You can find a list of the Boolean equivalents of all the ECAs on page 884. None are long, and none are hard to implement. They can't be, since the form of an ECA ensures 3 {0,1} inputs determine a single {0,1} output. The rest is just the adjacency layout, duplicating the same local connections across and down the whole array.
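As a quick sanity check (a sketch in Python rather than Mathematica), one can verify that both closed forms above reproduce the rule table that IntegerDigits[110, 2, 8] encodes:

```python
def rule_table(n):
    """Map each 3-cell neighborhood (p, q, r) to its output bit for ECA
    rule n - the same information as IntegerDigits[n, 2, 8]."""
    return {(p, q, r): (n >> (p * 4 + q * 2 + r)) & 1
            for p in (0, 1) for q in (0, 1) for r in (0, 1)}

# Boolean form: Xor[Or[q, r], And[p, q, r]]
boolean_110 = lambda p, q, r: (q | r) ^ (p & q & r)
# Arithmetic form: Mod[(1 + p) q r + q + r, 2]
arith_110 = lambda p, q, r: ((1 + p) * q * r + q + r) % 2

table = rule_table(110)
assert all(boolean_110(p, q, r) == bit for (p, q, r), bit in table.items())
assert all(arith_110(p, q, r) == bit for (p, q, r), bit in table.items())
```

All eight neighborhoods agree, which bears out the point that the entire rule content of an ECA fits in a one-line expression; everything else is wiring.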

Report this post to a moderator | IP: Logged

05-12-2006 12:14 PM