Wolfram Science Group
Phoenix, AZ USA
Registered: Aug 2003
There is a better, if more involved, answer. Wolfram's original classification is based on the behavior seen from random initial conditions. From random initial conditions, rule 90 gives random-looking behavior. It was therefore classified as a class 3 rule.
Later, investigating the various mechanisms for generating randomness, Wolfram came to think the more fundamental dividing line was between systems that simply transmit randomness already present in their initial conditions to their later outputs, and those that intrinsically generate randomness even from simple inputs.
The simple behavior of rule 90 from simple inputs is one sign - not yet a sufficient one, however - that it does not intrinsically generate the randomness seen. It is also an additive rule, meaning we can obtain the behavior from a complicated initial condition by additive superposition of the behaviors from simpler ones.
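The additivity is easy to see concretely. Rule 90 updates each cell to the XOR of its two neighbors, so evolving the XOR of two initial conditions gives the same result as XORing their separate evolutions. A minimal sketch (the helper names here are mine, not standard):

```python
import random

def step_rule90(cells):
    """One rule 90 step on a list of 0/1 cells (cells beyond the edges treated as 0)."""
    n = len(cells)
    return [(cells[i - 1] if i > 0 else 0) ^ (cells[i + 1] if i < n - 1 else 0)
            for i in range(n)]

def evolve(cells, t):
    """Apply t rule 90 steps."""
    for _ in range(t):
        cells = step_rule90(cells)
    return cells

random.seed(0)
n, t = 41, 12
a = [random.randint(0, 1) for _ in range(n)]
b = [random.randint(0, 1) for _ in range(n)]
xor_init = [x ^ y for x, y in zip(a, b)]

# Superposition: evolve(a XOR b) == evolve(a) XOR evolve(b)
lhs = evolve(xor_init, t)
rhs = [x ^ y for x, y in zip(evolve(a, t), evolve(b, t))]
assert lhs == rhs
```

Since any initial condition is the XOR of single-cell initial conditions, the whole evolution decomposes into a superposition of shifted nested patterns.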
One might want to argue that additive superposition might in some sense be thought of as adding apparent randomness or complexity. But basically, rule 90 from random initials looks random because the initials are, and it faithfully preserves the randomness already present in them.
Later, Wolfram took to calling all simple forms of behavior "class 2 behaviors". Nesting is one of those. It seemed more intricate and complex back in the early 1980s than it seems now, after it has been much more thoroughly studied. Rules that produce nesting from simple initials are quite common in a whole variety of simple program types. But NKS currently considers those simple behaviors, and calls them class 2. They are generally reducible - meaning, shortcut formulas can typically be found to give the value at a later time step, at some location within the pattern, without performing all the intermediate steps the rule itself goes through to arrive at that outcome for that bit.
In current usage, rule 90 from simple initials is clearly class 2, and rule 90 from complicated initials *looks* complicated, but can be analyzed and "reduced". I call those "cracks" - as in cracking a code - apparent complexity for which a short, reducing formula can be found. Rule 90 is the paradigmatic case of a "crack".
Another case with its own issues but some similarities is the apparent complexity of the digits of successive powers of 3 in base 2 (see page 121). Or consider continued fraction representations of square roots, which have a simple form, whereas their digit representations do not (see e.g. page 914). These are all cases in which our immediate impression is of considerable complexity - and in some of them that is certainly real on some level - combined with formulas or representations that find some simplicity in them.
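The square root case can be made concrete with the standard integer algorithm for continued fractions of surds (this sketch is mine, not taken from the book):

```python
from math import isqrt

def sqrt_cf_terms(d, n):
    """First n continued fraction terms of sqrt(d), for non-square d."""
    a0 = isqrt(d)
    m, q, a = 0, 1, a0
    terms = [a0]
    for _ in range(n - 1):
        m = q * a - m
        q = (d - m * m) // q
        a = (a0 + m) // q
        terms.append(a)
    return terms

print(sqrt_cf_terms(2, 8))   # → [1, 2, 2, 2, 2, 2, 2, 2]: simple and periodic

# The binary digits of sqrt(2), by contrast, show no obvious pattern:
bits = isqrt(2 << 40)        # floor(sqrt(2) * 2**20)
print(bin(bits))
```

The same number looks trivially regular in one representation and complex in another, which is the point: apparent complexity can depend on not having found the right representation.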
An open question in NKS that I call the "crackables" question is, how far do "cracks" extend into the space of apparently random, class 3 looking behavior? Wolfram thinks they are an exceptional case on the border, that reducibility "gives out" rapidly in class 3s, most of which are expected to be computationally irreducible. But one might instead wonder whether there are lots of "cracks" for various cases of apparent complexity, that simply have not been found yet. Perhaps we aren't looking at the right representation to see them, or haven't noticed the key idea needed to perform a reduction.
Wolfram's belief that it is the former rather than the latter is an empirical induction on partial evidence, rather than a deduction. We look at lots of class 3s and try to find ways to crack them. Throw half a dozen people and significant formal and computing power at the problem, looking for regularities. With some you find cracks, but it is rare. With some you can see a few elements from which you might start. With many, you can't, and no standard method of analysis seems able to reduce the complexity seen.
And this is the typical experience with cases selected as being near "complexity onset", the first few that show up as system resources are "dialed up" (by increasing allowed systems states, colors, ranges, allowed rules, etc). Beyond those, there are combinatorially larger "mountains" of rules (the number of possible rules explodes as you allow more features) with far more going on, where it seems vastly harder to look for "cracks".
That's the idea behind Wolfram's conjecture that almost all class 3s are computationally irreducible. But individual cases of apparent class 3 behavior might have some simple cause and yield to the right reductive explanation. So the conjecture can also be seen as a standing invitation to show Wolfram is wrong, by providing thousands of successful "cracks". If someone did so, it would be evidence that "crackables" extend farther into the class 3s than we currently suspect.