Wolfram Science Group
Phoenix, AZ USA
Registered: Aug 2003
Most, most likely, are - yes, that is what he claims. The reasons for those qualifications are as follows.
One can sometimes find a "crack" for complicated-looking behavior - some shortcut formula that successfully reduces the complexity seen. Clear class 2 cases of reducibility involve things like relating rule 90 to Pascal's triangle mod 2. Sometimes one can find something similar in more complicated cases - greatest-common-factor relationships, for example - behavior that looks complicated but can be reduced once you find the right "crack".
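To make the rule 90 example concrete, here is a minimal sketch (the function name `rule90_row` is my own) checking that the pattern rule 90 grows from a single black cell matches Pascal's triangle mod 2:

```python
# Rule 90: each cell becomes the XOR of its two neighbors.
# Starting from a single 1, row n of the pattern reproduces the
# binomial coefficients C(n, k) mod 2 - the "crack" for this rule.
from math import comb

def rule90_row(n, width):
    row = [0] * width
    row[width // 2] = 1
    for _ in range(n):
        row = [row[i - 1] ^ row[(i + 1) % width] for i in range(width)]
    return row

n = 8
ca = rule90_row(n, 2 * n + 1)          # wide enough that wraparound never matters
pascal = [comb(n, k) % 2 for k in range(n + 1)]
# nonzero cells sit at every other position across the light cone
print(ca[0::2] == pascal)             # → True
```

So instead of running 8 steps of the automaton, one can jump straight to any cell with a binomial coefficient - exactly the kind of shortcut a "crack" provides.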
So a blanket statement that all complicated-looking behavior is irreducible would not be correct. Why does Wolfram nevertheless think most cases are irreducible? It is a generalization - an induction, if you like - from exploring many rule-system "spaces", progressively "dialing up" their allowed number of elements or relations. What he found was that one quickly gets past all simple behavior. When one is still fairly close to simple behavior in this sense, one can sometimes find "cracks", but it gets very hard quite fast. Many examples appear that resist all efforts to crack them. Go a little farther out in this process, and it is hard even to start looking for a "crack".
Understand that what is peculiar about a crack is that it is generally limited to the one rule being looked at; it is not a general method. General methods of analysis - statistical analysis, run-length analysis, block frequencies, and so on - typically fail when applied to class 3 or 4 rules. (See the perception and analysis chapter for what these general methods typically manage to do.)
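As a sketch of why block frequencies fail here, one can tally length-3 blocks in the center column of rule 30 (a standard class 3 example; the helper name `rule30_center_column` is my own). For a reducible rule the counts would be lopsided; here they come out close to uniform, so the statistic reveals no shortcut:

```python
# Rule 30: new cell = left XOR (center OR right).
# Collect the center column from a single-1 seed, then count how often
# each length-3 block of 0s and 1s occurs in it.
from collections import Counter

def rule30_center_column(steps):
    width = 2 * steps + 3              # wide enough that edges never reach the center
    row = [0] * width
    row[width // 2] = 1
    column = []
    for _ in range(steps):
        column.append(row[width // 2])
        row = [row[i - 1] ^ (row[i] | row[(i + 1) % width]) for i in range(width)]
    return column

col = rule30_center_column(2000)
blocks = Counter(tuple(col[i:i + 3]) for i in range(len(col) - 2))
for block, count in sorted(blocks.items()):
    print(block, count)                # all 8 blocks occur with roughly equal counts
```

All eight possible blocks show up at close to the uniform rate of about 250 each, which is exactly what this kind of general analysis produces on class 3 behavior: no structure to exploit.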
So the intuition is, in effect, that the crackable cases are a kind of measure zero edge, with some special cases perhaps sprinkled among the farther out stuff too. Then one can ask of the rest, the sea of uncracked cases that seems to extend indefinitely as you increase allowed system complexity: "so, are there also cracks for most or all of these, that we just haven't found yet? Or is there some general method we haven't thought of, that will slice off huge chunks of them and successfully reduce whole classes at a time?"
And Wolfram's claim is that no, there does not have to be an as-yet-unfound crack for each of them, and no, there is no good reason to expect as-yet-undiscovered general methods to reduce gobs of them at a time. His reasons for this run all through chapter 12, explaining his Principle of Computational Equivalence (PCE). We already know that such things should not be expected of the cases that are universal, because assuming both reducibility and universality leads to paradoxes; the two do not fit together. Instead, he thinks irreducible complexity is real, appears early and in the typical case, and that "cracks" and the like are exceptions.
Another reason is that universality proofs act as a kind of ratchet. Once you have proved that systems x1 and x2 are universal, a proof for some new system x3 only needs an emulation linking it back to x2, the simpler of the two. The simpler the systems already proved to be past the universality threshold, the easier it becomes to show that some additional system has enough internal resources to support universality. Some very elaborate Turing machine may be hard to emulate. But then someone shows that a simple one is universal, and now you only have to emulate that one. A very different system, like Spencer-Brown forms, only has to emulate rule 110.
As for the other qualification, the "probably", it stems from the difficulty of proving things about whole classes of complicated rules. A universality proof is an elaborate construction that exploits details of a given case. So it is typically much easier to use the ratchet above to show, e.g., that there is some system A with enough resources already - an example, or a small class of examples, rather than "all A with characteristic X". Not because those are all there are, but because that is what is easier to show.
So for the bulk of the distant cases - many allowed elements, colors, settings, possible connections etc - one typically has neither a crack (constructed reduction) nor a proof (constructed demonstration there isn't one). PCE is the conjecture that it isn't the cracks that are missing. Or, if you look for cracks and do not find them, that is already evidence the case examined is irreducible (not universal, necessarily - that is a harder additional condition to show).
Not certainty, but evidence, supporting but not establishing the PCE conjecture. This is not, incidentally, a call not to look for cracks. People should look for them, and if they found enough of them (especially ones that generalize) they would argue against PCE. We'd know a lot more about complexity than we do now if that succeeded. But Wolfram does not expect it to succeed. Instead he expects cracks will remain exceptions, while withstanding all analysis - as e.g. rule 30 has already done for a decade - will be the normal outcome.
I hope this helps.