[Computational irreducibility and Classes 3 and 4] - A New Kind of Science: The NKS Forum


# Computational irreducibility and Classes 3 and 4

Posted by: Paul Reiners

What exactly are Wolfram's claims about computational irreducibility and Classes 3 and 4 CA? I seem to remember reading that he claims that all Class 3 and 4 CA are computationally irreducible. However, I could be mistaken and couldn't find any such definite claims while looking through ANKOS just now. From browsing through it again, it looks like he just claims that most, if not all, Class 3 and 4 CA are most likely computationally irreducible.

Could someone please clarify this for me? I'm writing an article and don't want to get this wrong.

Posted by: Jason Cawley

Most, and most likely, are - yes, that is what he claims. The reasons for those two qualifications are as follows.

One can sometimes find a "crack" for complicated-looking behavior - some shortcut formula that successfully reduces the complexity seen. Clear class 2 cases of reducibility involve things like relating rule 90 to Pascal's triangle mod 2. Sometimes one can find something similar in more complicated cases - e.g. greatest common factor relationships - that look complicated but can be reduced once you find the right "crack".
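To make that kind of "crack" concrete, here is a small sketch (mine, not from the thread): rule 90 grown from a single black cell reproduces Pascal's triangle mod 2, so the cell values at step t can be read off directly as binomial coefficients mod 2 instead of running the automaton step by step - exactly the sort of shortcut formula meant here.

```python
from math import comb

def rule90_step(row):
    """One step of elementary CA rule 90: each new cell is the XOR of its
    left and right neighbours (the row grows by one cell on each side)."""
    padded = [0, 0] + row + [0, 0]
    return [padded[i] ^ padded[i + 2] for i in range(len(padded) - 2)]

row = [1]  # a single black cell
for t in range(16):
    # At step t, the cells whose offset shares the parity of t carry
    # C(t, k) mod 2; the cells in between are always 0.
    assert row[::2] == [comb(t, k) % 2 for k in range(t + 1)]
    row = rule90_step(row)
print("rule 90 matches Pascal's triangle mod 2 for the first 16 steps")
```

The shortcut is the point: computing C(t, k) mod 2 takes essentially no work (by Lucas' theorem it is 1 exactly when the binary digits of k are a subset of those of t), whereas simulating the CA costs time growing with t.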

So a blanket statement that all complicated-looking behavior is irreducible would not be correct. Why does Wolfram nevertheless think most cases are irreducible? It is a generalization - an induction, if you like - from exploring many rule-system "spaces", progressively "dialing up" their allowed number of elements or relations. What he found was that one quickly gets past all simple behavior. When one is still pretty close to simple behavior in this sense, one can sometimes find "cracks", but it gets very hard quite fast. Many examples appear that resist all efforts to crack them. Go a little farther out in this process, and it is hard even to start looking for a "crack".

Understand that what is peculiar about a crack is that it is generally limited to only the immediate rule one is looking at. It is not a general method. General methods of analysis typically fail when applied to class 3 or 4 rules. By that I mean things like statistical analysis, run length analysis, block frequencies, etc. (See the perception and analysis chapter for what these general methods typically manage to do).
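As a quick sketch of what those general methods actually see (my example, using the standard definition of rule 30): simple statistics of rule 30's center column - density of 1s, frequencies of short blocks - come out looking like a fair coin, which is precisely why such methods fail to reduce it.

```python
def rule30_step(row):
    """One step of elementary CA rule 30: new cell = left XOR (center OR right)."""
    padded = [0, 0] + row + [0, 0]
    return [padded[i] ^ (padded[i + 1] | padded[i + 2])
            for i in range(len(padded) - 2)]

row = [1]  # a single black cell
center = []
for _ in range(2000):
    center.append(row[len(row) // 2])  # the column below the initial cell
    row = rule30_step(row)

ones = sum(center) / len(center)
# Frequencies of the four possible 2-blocks (00, 01, 10, 11) in the column.
pairs = [2 * a + b for a, b in zip(center, center[1:])]
freqs = [pairs.count(v) / len(pairs) for v in range(4)]
print(f"density of 1s: {ones:.3f}")                          # near 0.5
print("2-block frequencies:", [f"{f:.3f}" for f in freqs])   # each near 0.25
```

The measurements come out statistically unremarkable - indistinguishable, by these tests, from random bits - so block-frequency or run-length analysis extracts no compression, which is the sense in which the general methods "fail" on class 3 behavior.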

So the intuition is, in effect, that the crackable cases are a kind of measure zero edge, with some special cases perhaps sprinkled among the farther out stuff too. Then one can ask of the rest, the sea of uncracked cases that seems to extend indefinitely as you increase allowed system complexity: "so, are there also cracks for most or all of these, that we just haven't found yet? Or is there some general method we haven't thought of, that will slice off huge chunks of them and successfully reduce whole classes at a time?"

And Wolfram's claim is that no, there doesn't have to be an as-yet-unfound crack for each of them. And no, there isn't good reason to expect as-yet-undiscovered general methods to reduce gobs of them at a time. His reasons for this are all through chapter 12, explaining his principle of computational equivalence (PCE). We already know that such things should not be expected of the cases that are universal, because paradoxes arise from simultaneously assuming reducibility and universality; the two don't fit each other. Instead, he thinks irreducible complexity is real, happens soon and in the typical case, and "cracks" and the like are exceptions.

Another reason is that universality proofs act as a kind of ratchet. Once you've proved that x1 is universal, and later that a simpler x2 is, a proof for a new system x3 only needs an emulation linking it back to x2. The simpler the systems already proved to be past the universality threshold, the easier it becomes to show that some additional system has enough internal resources to support universality. Some very elaborate Turing machine may be hard to emulate. But then someone shows this simple one is universal, and now you only have to emulate that one. Some very different system, like Spencer-Brown forms, only has to emulate rule 110.

As for the other qualification, the "probably", it stems from the difficulty of proving things about whole classes of complicated rules. A universality proof is an elaborate construction that uses details of a given case. So it is typically much easier to exploit the above ratchet to show, e.g., that there is some system A that has enough already - an example, or a small class of examples, rather than "all A with characteristic X". Not because that is all there are, but because that is what is easier to show.

So for the bulk of the distant cases - many allowed elements, colors, settings, possible connections etc - one typically has neither a crack (constructed reduction) nor a proof (constructed demonstration there isn't one). PCE is the conjecture that it isn't the cracks that are missing. Or, if you look for cracks and do not find them, that is already evidence the case examined is irreducible (not universal, necessarily - that is a harder additional condition to show).

Not certainty, but evidence, supporting but not establishing the PCE conjecture. This is not, incidentally, a call not to look for cracks. People should look for them, and if they found enough of them (especially ones that generalize), that would argue against PCE. We'd know a lot more about complexity than we do now if that succeeded. But Wolfram does not expect it to succeed. Instead he expects cracks will remain exceptions, while withstanding all analysis - as e.g. rule 30 has already done for a decade - will be the normal outcome.

I hope this helps.

Posted by: Charlie Stromeyer jr.

"But Wolfram does not expect it to succeed. Instead he expects cracks will remain exceptions, while withstanding all analysis - as e.g. rule 30 has already done for a decade - will be the normal outcome."

By traditional criteria, then, it seems to me that the PCE is a reasonable scientific hypothesis. Does Wolfram view the PCE within some kind of overall idea of computational complexity? So, e.g., if some powerful quantum computer someday came up with new cracks, would these cracks not be valid counter-examples, because they occur only non-classically and thus outside the framework of the PCE?

Posted by: Jason Cawley

Wolfram subscribes to Church's thesis - that is, that the (general) recursive computations are exactly the physically realizable computations. He thinks that the eventual form of fundamental physics will be based on a discrete element rule.

One can imagine some kinds of serious success in quantum computing that would throw considerable doubt on any underlying discrete generator of QM. (Note - not all imaginable successes. Some result might show that multiple possible paths are really involved, without showing that the number of them is a continuum's worth rather than, e.g., finite or countable.)

Wolfram effectively predicts that will not happen. From his point of view, a constructive success building a discrete element network based rule for fundamental physics is what will be needed to remove (most) doubts about Church's thesis stemming from physical ideas of real continuum effects.

Whether the underlying stuff of the universe is fundamentally discrete or continuous was one of Kant's antinomies of pure reason. That is, he thought it was a question that could not be decided by the human mind, where reasons could be advanced for either proposition that could not be refuted from within that position, and need not conflict with any sort of evidence. I think it is fair to say the reasons why he thought so would now be regarded as somewhat naive.

It remains possible, however, that the world just might not let us decide between the two empirically. That is, that all continuous underlying theories actually fitting experience might be generable by some discrete model, and conversely that successful discrete models can be mimicked by continuous ones. I am not saying this is the case; I even think it unlikely. But we should notice the possibility.

Effectively we may be trying to decide whether continuity is one of our abstractions used to analyse a discrete world, an abstraction that works especially at higher levels of analysis - or whether it is real, and discrete effects all arise from things like harmonics or modes of continuous vibrations and the like. We may not be able to tell, if the formal methods for each are so powerful, and empirical realities such, that some form of each can capture everything that actually happens.

If the continuous model that fits has 10 dimensions and a gazillion possible worlds the overwhelming majority of which look nothing at all like this one, and the discrete model that fits is one of the simplest possible and generates our known laws in a natural manner, then it may be reasonable to incline to the discrete universe idea. If on the other hand practical quantum computers do things that seem to exceed limits of finite systems, and if in addition discrete models of physics can't deal with e.g. the non-locality required for any hidden variable explanation of QM in a natural way, but seem to need ad hoc hypotheses and epicycles, then it may be reasonable to incline to the continuous universe idea. I don't know that I'd call either "falsification" let alone "proof" or verification. But that is the way things would look.

Wolfram's bet is on the discrete side of that. PCE is meant to be true of physically realizable computations. Now, someone else besides Wolfram - perhaps including Wolfram in the future, after the quantum computer people dazzle us with their new, as yet unheard-of results - might keep something like PCE for a restricted class of finite element computations - a reduced PCE for classical cases only. But that is not where he is standing right now. He has stuck his neck out: Church's thesis is right, and PCE will generalize fully. He is fully aware that this will be more plausible than it may seem to some today if a discrete fundamental physics approach is successful.

A fine question by the way.

For a bit of the background history, see this note -

http://www.wolframscience.com/nksonline/page-1125d-text