wolframscience.com

A New Kind of Science: The NKS Forum (Powered by vBulletin version 2.3.0)
The NKS Forum > NKS Way of Thinking > Defining the Classes of Elementary Cellular Automata
Lawrence J. Thaden


Registered: Jan 2004
Posts: 356

Defining the Classes of Elementary Cellular Automata

According to some philosophers there are three kinds of definitions:

1. Nominal,
2. Descriptive, and
3. Explanatory.

Nominal definitions tell us about the correct use of names.

Descriptive definitions tell us about the things named in terms of our experience.

Explanatory definitions make assertions about the things that nominal definitions name and these assertions are independent of our experience.


Nominal definitions

Here is an example of a nominal definition of Class 1 cellular automaton behavior: I hold up a graph and you identify it as Class 1 and Stephen Wolfram agrees with us.

If we were to catalog all 256 graphs on pages 55-56 of the NKS book, we would have a set of nominal definitions for elementary cellular automata categorized according to the four classes.

To my knowledge this catalog is not publicly available.


Descriptive definitions

There are descriptions of the four classes in the NKS book on pages 231 and 235.


Class 1 description

“In class 1, the behavior is very simple, and almost all initial conditions lead to exactly the same uniform final state.”


Class 2 description

“In class 2, there are many different possible final states, but all of them consist just of a certain set of simple structures that either remain the same forever or repeat every few steps.”


Class 3 description

“In class 3, the behavior is more complicated, and seems in many respects random, although triangles and other small-scale structures are essentially always at some level seen.”


Class 4 description

“… class 4 involves a mixture of order and randomness: localized structures are produced which on their own are fairly simple, but these structures move around and interact with each other in very complicated ways.”

On page 235 Wolfram continues:

“I originally discovered these four classes … by looking at thousands of pictures… And at first … I based my classification purely on the general visual appearance of the patterns I saw.

“But when I studied more detailed properties of cellular automata, what I found was that most of these properties were closely correlated with the classes that I had already identified.”

So we have descriptions that define the four named classes in terms of our experience of their visual appearance.


Explanatory definitions

These definitions tell us about the classes in terms that are independent of our visual experience of cellular automata.

For example, on page 58 of the NKS book there are examples of class 2 cellular automata that produce fractal patterns and the fractal dimension is given for each.

Clearly a fractal dimension is a term that has meaning independent of our visual experience.

This kind of definition is usually stated in geometric terms like circumference equals Pi times diameter, or physics terms like energy equals mass times the speed of light squared.
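As a toy illustration of an explanatory definition at work, here is a rough Python sketch (my own, not from the book or the thread) that grows the rule 90 pattern from a single black cell and recovers its fractal dimension by counting occupied boxes; for 2^k rows the cell count is exactly 3^k, giving dimension log2(3) ≈ 1.585.

```python
import math

def rule90_pattern(steps):
    """Coordinates (row, column) of black cells for rule 90 grown from a
    single black cell; each new cell is the XOR of its two neighbours."""
    cells = {0}                      # columns that are black in the current row
    black = {(0, 0)}
    for t in range(1, steps):
        cells = {i for i in range(-t, t + 1)
                 if ((i - 1) in cells) ^ ((i + 1) in cells)}
        black |= {(t, i) for i in cells}
    return black

def box_dimension(black, steps, box):
    """Box-counting estimate of fractal dimension: count the boxes of side
    `box` that contain a black cell, then take log(count) / log(steps / box)."""
    boxes = {(t // box, i // box) for (t, i) in black}
    return math.log(len(boxes)) / math.log(steps / box)
```

With steps = 256 and box = 1 this returns log2(3) exactly, since the number of black cells in 2^k rows of the rule 90 pattern is 3^k; coarser box sizes give nearby estimates.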


Question for the forum

Is it consistent with the NKS way of thinking to hold out hope that we will ever see purely explanatory definitions of the four classes?

__________________
L. J. Thaden

Posted 05-14-2004 05:13 AM
Jason Cawley
Wolfram Science Group
Phoenix, AZ USA

Registered: Aug 2003
Posts: 712

I think that is certainly possible and potentially useful. Occasional cases can be found that are intermediate between classes under the descriptive definitions. For any given choice of explanatory definitions, some of these may fall in one class or another. But that is only a minor problem.

To arrive at explanatory definitions in the sense you state, one would need already-worked-out differences between CAs, expressed in objective terms. The Atlas will have lots of them. Many CA properties have already been tabulated for the 256 elementary rules. A reference for some of them can be found here -

http://www.stephenwolfram.com/publi...ndix/index.html

Posted 05-14-2004 03:39 PM
janos

CT

Registered: Nov 2004
Posts: 23

I am looking for a fourth type of classification: "procedural" or "programmatic", that is, classifying a CA just from the run, without looking at it on screen or paper. Preferable would be a function which could take the CA in question and, after some iterative steps and measures, make a statement like: "This CA is Class 3 with 86% probability."

It would also be good to define some metric on the set of CAs: if the "distance" of a CA from a well-known representative of a class, taken as an etalon, is below a threshold, then it belongs to that class; otherwise it does not.
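One way to prototype such a metric - a rough Python sketch of my own, not anything from NKS; the density-time-series signature and the Euclidean distance are arbitrary illustrative choices:

```python
import random

def step(cells, rule):
    """One update of an elementary (k = 2, r = 1) CA on a ring of cells."""
    n = len(cells)
    return [(rule >> (cells[(i - 1) % n] * 4 + cells[i] * 2 + cells[(i + 1) % n])) & 1
            for i in range(n)]

def signature(rule, width=64, steps=128):
    """A crude per-rule signature: the black-cell density at each step,
    starting from one fixed random seed."""
    rng = random.Random(42)          # fixed seed so signatures are comparable
    cells = [rng.randint(0, 1) for _ in range(width)]
    sig = []
    for _ in range(steps):
        cells = step(cells, rule)
        sig.append(sum(cells) / width)
    return sig

def distance(rule_a, rule_b):
    """Euclidean distance between two rules' signatures; a rule could then be
    assigned to a class by its distance to that class's etalon rule."""
    a, b = signature(rule_a), signature(rule_b)
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
```

Whether such a crude signature separates the classes well is exactly the open question; it only shows the threshold idea in executable form.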

What do you think ?

János

__________________
--------------------------------------
"..because Annushka has already bought
sunflower oil, and not only bought it, but
spilled it too."
Bulgakov: Master and Margarita

Posted 02-16-2005 02:55 PM
Tony Smith
Meme Media
Melbourne, Australia

Registered: Oct 2003
Posts: 168

Taking human perceptions out of the loop

János, the most I can do is try to talk about your question in fairly general terms, but even that will reveal serious obstacles to trying to over-automate the classification process.

The first problem, which I have written about here previously and elsewhere, is the difference between examining running processes and examining their outcomes. There really is no such thing as a Class 4 outcome, save maybe a process not stabilising. Rather, Class 4 processes tend to produce either Class 2 or Class 3 outcomes, depending sensitively on the data with which they are seeded.

A lot of good work has been done by various people to detect certain kinds of patterned stabilisation, such as long-cycle repetition and even "spaceship" detection. (The existence of spaceship-type moving signals may be a necessary condition for Class 4, such phenomena having played an indispensable role in proofs to date of universality.) I've even built one process which detects repetitions with periods up to 4294967292, but that too is another story.

For the point of this exercise, let's assume that we have both a method for detecting Class 2 outcomes and another method for detecting Class 3 (random) outcomes. Some people will argue about the possibility of detecting intrinsic randomness, but I'm happy to run with Wolfram's position on this. More than once I have become overly interested in computationally extravagant processes that periodically spit out a single binary digit for which there appears to be no significantly simpler way of determining the value; the sequence of those digits, like say the digits of π, appears to pass all tests of randomness.

So the outline of your procedure or program might be something like:

for each rule {
    for each seed configuration {
        iterate rule
            until regularity detected
            or randomness detected
            or non-stopping presumed
    }
    count outcomes by iterations to detect
}
assign to Class depending on mix of outcomes across seeds
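A minimal executable rendering of this outline, as a Python sketch of my own (note that "randomness detected" is crudely approximated by failure to repeat within a step budget, which is a stand-in rather than a real randomness test):

```python
import random

def step(cells, rule):
    """One update of an elementary (k = 2, r = 1) CA on a ring of cells."""
    n = len(cells)
    return [(rule >> (cells[(i - 1) % n] * 4 + cells[i] * 2 + cells[(i + 1) % n])) & 1
            for i in range(n)]

def run_outcome(rule, seed, max_steps=500):
    """Iterate until the row repeats (regularity: the run entered a cycle)
    or the step budget runs out (presumed random or not stopping)."""
    seen = {tuple(seed)}
    cells = seed
    for t in range(1, max_steps + 1):
        cells = step(cells, rule)
        key = tuple(cells)
        if key in seen:
            return "regular", t
        seen.add(key)
    return "unresolved", max_steps

def classify_outcomes(rule, width=40, seeds=20):
    """Tally outcomes across random seeds; the mix of outcomes across seeds
    is what a class assignment would then be based on."""
    tally = {"regular": 0, "unresolved": 0}
    rng = random.Random(0)
    for _ in range(seeds):
        seed = [rng.randint(0, 1) for _ in range(width)]
        outcome, _ = run_outcome(rule, seed)
        tally[outcome] += 1
    return tally
```

For example, classify_outcomes(0) resolves every seed as regular, while rule 30 on a width-40 ring almost never repeats within the budget.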

I still have my suspicions that there are going to be rules which produce a mix of quite rapid Class 2 and Class 3 outcomes but which are not universal. I would therefore only tend to count as Class 4 those rules which sometimes take many iterations to demonstrate unambiguous regularity or unambiguous randomness. (This has led me to the idea that there may be some rules better described as Class 2/3 than Class 4, which I included in a diagram in my recent response to Jason in another thread.)

My ideas here are mostly influenced by the results I have accumulated to date studying (Conway's) Life "in a Tube", that is, in a narrow cylindrical universe. Thus confined, the Life rule typically takes a lot longer to demonstrate unambiguous regularity or randomness than does Life on an open rectangular grid, in my opinion showing Life in a Tube to be even deeper in Class 4 than Life itself. Of course this is the antithesis of the PCE, but that is where I get when I try to weigh the simplicity of seed configurations alongside the simplicity of rules.

I have reattached a small animated GIF of a simple enough example of Life in a Tube which I call "Blinker Gobblers". The pair of gobblers arise from a simple seed configuration in a circumference 14 tube, shown rolled out five times in the GIF as an aid to human perception, and single gobblers arise from a number of other simple seeds on other tube circumferences. It is a pattern which exhibits periodicity, growth and movement that might test any process designed to detect regularity, yet its fate can be predicted with certainty and so it is an unambiguous Class 2 outcome.

[Attached image: animated GIF of the "Blinker Gobblers" pattern]

__________________
Tony Smith
Complex Systems Analyst
TransForum developer
Local organiser

Posted 02-17-2005 02:03 AM
Jason Cawley
Wolfram Science Group
Phoenix, AZ USA

Registered: Aug 2003
Posts: 712

First, to state what should be obvious but might not be: universality is not part of the definition of class 4. That class 4s are in general probably capable of universal computation in the infinite-size limit is a result, not something that proceeds from a definition. Class 4 is not meant to designate "such algorithms as are capable of universal computation".

Class 4 means - from typical (not special or measure zero) random initials (not whole ensembles of initials taken as a programmer's degree of freedom, and not simple initials), typically resolves into a number of localized structures that themselves move about and interact, in often complicated ways.

Class 3 means - from typical random initials, produces random chaotic patterns.

These can be operationalized, certainly. You can set up measures that classify automatically. A good one for class 3 is simply the level of entropy, as usually defined - it tends to be stable and high for class 3s. You don't have to look at all initials. Indeed, it is well known that special initial conditions can often force class 2 behavior from class 3 rules (e.g. start rule 30 from alternating black - white - black - white - ... - you get simple stripes).

Any given operational definition (simple transform or formula) may disagree with how we'd classify the rule when looking at its whole evolution, for some small class of rules. And some methods of detection may more readily detect e.g. 2-ness, while not distinguishing 3 and 4, say. What I mean by an operational definition is a formal mapping or function from a sample of rule behaviors to an integer 1, 2, 3 or 4. It should be code, no intervening words.

Andy Wuensche noticed a decent classification scheme based on the absolute level and the variance in entropy, taken as two dimensions. Class 1 go to entropy zero at some point and of course stay there. Class 2s typically resolve to low entropy in a short period and the variance in the entropy within a given run is low. Class 3s typically have high entropy, nearly maximal, and it is usually stable, thus low variance. Class 4s have middling to high entropy measure but also show variance in entropy over the course of a run.

Short initial transients can obscure these results a bit but are easy to throw out. Neither system size nor sample size needs to be particularly high, but you don't want to use a width so tiny the system is forced to repeat, nor single runs that may give an unusual entropy measure. In practice, for e.g. CAs, if you use widths of 50-100, use the second half of an evolution of 100-200 steps, and use a sample of 20, 50, or 100 runs from different initials, you will not get misleading results. Typical initial transients in class 2s can be as short as the log of the number of possible starting states. 4s will still be on their much longer transients. 3s started from random initials are generally fully "scrambled" in just a few steps, and anything that is going to go to a single eventual state (class 1) is usually forced to do so very rapidly by tight constraints in the underlying rule.

(One semi-exception to that last - there are some "cancelling" rules that typically look 2-ish but that die exactly if started from exactly equal black and white concentration. They typically reach their end state - whatever it will be - as soon as information has had a chance to spread from each part of the initial, to every location in the system).
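A toy version of the entropy level-and-variance measurement described above, as a Python sketch (my own illustration, not Wuensche's code; the block length of 3 and the run sizes are arbitrary picks from the ranges suggested):

```python
import math
import random

def step(cells, rule):
    """One update of an elementary (k = 2, r = 1) CA on a ring of cells."""
    n = len(cells)
    return [(rule >> (cells[(i - 1) % n] * 4 + cells[i] * 2 + cells[(i + 1) % n])) & 1
            for i in range(n)]

def row_entropy(cells, block=3):
    """Shannon entropy (bits) of the length-`block` windows in one row."""
    n = len(cells)
    counts = {}
    for i in range(n):
        w = tuple(cells[(i + j) % n] for j in range(block))
        counts[w] = counts.get(w, 0) + 1
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def entropy_profile(rule, width=80, steps=200, runs=20):
    """Mean and variance of row entropy over the second half of each run,
    pooled across several random initial conditions."""
    vals = []
    rng = random.Random(1)
    for _ in range(runs):
        cells = [rng.randint(0, 1) for _ in range(width)]
        for t in range(steps):
            cells = step(cells, rule)
            if t >= steps // 2:          # throw out the initial transient
                vals.append(row_entropy(cells))
    mean = sum(vals) / len(vals)
    var = sum((v - mean) ** 2 for v in vals) / len(vals)
    return mean, var
```

On this measure rule 0 (class 1) sits at entropy zero and rule 30 (class 3) near the 3-bit maximum with low variance; class 4 rules are expected to sit in between with more variance, per the scheme described above.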

Repetition is easy to test for - just ask if the current line is exactly reproduced in any line above it. Since the rule is deterministic and looks only 1 step back in time, if it hits the same pattern at any point it must cycle from that point. For rule setups with dependence on multiple steps, you have to test larger blocks in time, but the principle is the same. You can also test for "2-ness" by looking for shifted versions of a previous line - is current == RotateRight[previous, n] - since the most common form of 2-ness besides identity is some sort of shift map that leaves structures unchanged.
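Those two tests can be written directly; a small Python sketch of the idea (list rotation here plays the role of Mathematica's RotateRight):

```python
def repeats_or_shifts(history, current):
    """Return ("repeat", t) if `current` exactly matches row t of `history`,
    ("shift", t, n) if it matches row t rotated by n cells, else None."""
    w = len(current)
    for t, prev in enumerate(history):
        if prev == current:
            return ("repeat", t)
        for n in range(1, w):
            if prev[n:] + prev[:n] == current:   # a rotated copy of row t
                return ("shift", t, n)
    return None
```

A hit of either kind on a deterministic one-step rule means the evolution is periodic (possibly up to a shift) from that point on.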

I should point out, however, that the exact sense of the classification scheme is not dependent on the infinite time behavior of the system, but on its typical behavior from random initials. Infinite time behavior classification is an older dynamical systems idea, one that was in the air at the time Wolfram came up with the classification, certainly. They were thinking of continuous functions mapped in a phase space or similar parameter space, and looking for accumulation points, limit cycles, or bounded regions. Typical simple rules are not continuous maps.

And the behavior one cares about includes the (longer) transients and typical dynamics, not just an infinite limit. In the absolutely infinite limit, all finite-state deterministic systems must repeat - they have only a finite possible "target set" of future states, and hitting any one already hit forces a cycle. If a rule starts from a single black cell, grows a Taj Mahal, encodes elaborate running commentary on a Chopin prelude, and then reverts to a single black cell and does it all over again - we would not describe it as "class 2". The underlying NKS intuition is that transients do matter if they are long and anything interesting is happening during them - that indeed the universe is just such a long transient.

Notice, in the hypothetical case above, measured entropy would be variable over the run, and sometimes quite high. A mapping like that used by Wuensche would tag it as class 4.

That is an attempt to measure what the actual classification describes, but it is not a perfect measure, because it is aimed at a typical characteristic rather than the defining one. What the class is meant to refer to, as originally stated, is "localized structures" - which one might instead try to measure as similar regions (areas that are the same up to a shift) in an overall pattern that is not repeated nor equivalent to a shifted version of a previous step.

Compared to a 3, one will see subpatterns smaller than the overall pattern that repeat far more often. There are some such patterns in 3s, but they are limited in scale, like random fluctuations. One might try to construct a direct detector for that, rather than for entropy variation. In a 3, a block frequency spectrum should show a rapidly declining number of repeated blocks as the block length increases.

In a 4, there can be a characteristic scale (e.g. the most common background in rule 110 repeats blocks 14 cells wide every 7 steps), but there will be much larger regions that are similar to each other (equivalent up to a shift). In 110, for example, out of the 2^14 possible blocks 14 cells wide you will see way too many occurrences of some of them for it to be the result of chance. Localized structure. Combine that with not actually detecting 2-ness, and you will have a 4-ness indicator.

Borderline cases where the background is small might give some trouble, though. E.g. in rule 54, the repetitions are 3 blacks and 1 white, or the reverse. But there are far more of those than blocks of 2, so the block frequency spectrum will still look different from a class 3 rule's. It just might be a bit harder to notice, so close to the small-block edge of the spectrum.
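A crude block-frequency probe along these lines, as a Python sketch (my own illustration): for each block length it records how often the single most common window occurs, which for a structured pattern should stay far above the chance expectation even at large block lengths.

```python
def block_spectrum(row, max_len=8):
    """For each block length L up to max_len, the count of the single most
    common length-L window in the row (treated as a ring)."""
    n = len(row)
    peaks = {}
    for length in range(1, max_len + 1):
        counts = {}
        for i in range(n):
            w = tuple(row[(i + j) % n] for j in range(length))
            counts[w] = counts.get(w, 0) + 1
        peaks[length] = max(counts.values())
    return peaks
```

For a random row the peak at length L falls off roughly like n / 2^L; a periodic row keeps large peaks, e.g. block_spectrum([0, 1] * 16, 4)[4] is 16, since half of all length-4 windows are the same block.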

I hope this helps give some idea of the classification system and ways of detecting it by code, rather than just visual impressions.

Posted 02-17-2005 06:23 PM
janos

CT

Registered: Nov 2004
Posts: 23

Stephen Wolfram visited Yale this Monday, and "Manipulate" looks promising. His lecture set this classification issue going in my head as I was driving home. I started some explorations yesterday. I have not yet looked into the NKS book to see whether Wolfram did similar ones, so please correct me if I am on already-covered ground.

I looked only at the k = 2, r = 1 cases. I tried to recreate the page from the book where he shows all of them on one page, but GraphicsArray just did not want to cooperate with me. So I created a Table like this:

ca = Table[
  CellularAutomaton[{i, 2, 1}, Join[Table[0, {l, 255}], {1}], 255],
  {i, 1, 255}];

Note that my initial conditions are 255 white on the left and one black cell on the right.

Then I created an entropy-like quantity - it is entropy-like only because it contains the Log of some other quantity.

lp = Table[Map[N[Log[FromDigits[Reverse[#]]]] &, ca[[i]] ], {i, 255}];

One element in lp is a list of the Logs of numbers, where each number was created from the state of the automaton at a given step. I wanted to see what these numbers look like for all the elementary rules.

As you can see, I am back in the "looking with the eye" stage. One has to start somewhere :). Then I thought it might be interesting to see a similar value for the Transpose of the CA, that is, looking down each cell position from the top and taking the value created as that position changed from step to step, so I created another table:

tp = Table[Map[N[Log[FromDigits[Reverse[#]]]] &, Transpose[ca[[i]]]], {i, 255}];

Then I thought it might be interesting to see these two tables in relation to each other and to look for any clustering happening there. I had to make sure that the two tables were of equal length. So I created a fourth table:

tl = Table[Thread[List[lp[[i]], tp[[i]] ] ], {i, 255}];

and plotted all these side by side. Then it occurred to me that it would be nice to see how the different values would look as a graph, so I created a fifth table:

tlgp = Table[Map[#[[1]] -> #[[2]] &, Partition[tl[[i]], 2, 1]], {i, 255}];

and plotted one automaton per row with all its derived table elements:

Table[Show[
  GraphicsArray[{
    ArrayPlot[ca[[i]], DisplayFunction -> Identity],
    ListPlot[lp[[i]], AspectRatio -> 1, PlotRange -> All,
      AxesOrigin -> Automatic, DisplayFunction -> Identity],
    ListPlot[tp[[i]], AspectRatio -> 1, PlotRange -> All,
      AxesOrigin -> Automatic, DisplayFunction -> Identity],
    ListPlot[tl[[i]], AspectRatio -> 1, PlotRange -> All,
      AxesOrigin -> Automatic, DisplayFunction -> Identity],
    GraphPlot[tlgp[[i]], "RootPosition" -> Center, Method -> "Automatic",
      DisplayFunction -> Identity]}],
  DisplayFunction -> $DisplayFunction, PlotLabel -> {i}], {i, 255}];


At the end I threw in

<<RealTime3D

and looked at the particularly interesting graphs with GraphPlot3D, like:

GraphPlot3D[tlgp[[3]], "RootPosition" -> Center, Method -> "Automatic"]

Now I know why rule 110 is universal. It has two ears to hear and a nice necktie to be a member of the intelligentsia!!

GraphPlot3D[tlgp[[110]], "RootPosition" -> Center, Method -> "Automatic"]

It looks like all Class 3 rules show an interesting triangular clustering pattern in tl[[rule]] - the fourth plot in a row. All Class 4 types show a different kind of clustering in tl, with a wide gap between the cluster elements. It is also interesting that every Class 3 rule gives a straight line in tlgp. At the same time, almost all Class 2 and Class 4 rules have some closed graphs in tlgp.

The reason I wanted to see the GraphPlot of the tlgp-s is that when you look at the ListPlot of tl[[45]] and tl[[110]] with PlotJoined -> True, it shows some structure. So I wanted to see which point is connected to which, and that is why I ended up creating the tlgp table.

Interestingly, tlgp[[102]] is a perfect 16-sided polygon.
With the best,

János

__________________
--------------------------------------
"..because Annushka has already bought
sunflower oil, and not only bought it, but
spilled it too."
Bulgakov: Master and Margarita

Posted 02-18-2005 10:25 PM
Forum Sponsored by Wolfram Research

© 2004-14 Wolfram Research, Inc.