A New Kind of Science: The NKS Forum > Pure NKS > Two Violations of the Principle of Computational Equivalence
Author
Craig Feinstein

Registered: Nov 2004
Posts: 36

Two Violations of the Principle of Computational Equivalence

Consider the Collatz 3n+1 sequence discussed here:

http://mathworld.wolfram.com/CollatzProblem.html

If the Collatz conjecture is true, then the Collatz 3n+1 sequence is an example of a process that is not obviously simple (because it is computationally irreducible) yet always halts. Therefore, if the Collatz conjecture is true, the Collatz sequence cannot be universal.
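For concreteness, here is a minimal sketch of the 3n+1 iteration in Python; note that termination of this loop for every starting n is exactly what the (open) Collatz conjecture asserts:

```python
def collatz_steps(n):
    """Number of 3n+1 steps until the sequence first reaches 1.

    Halting for all n >= 1 is precisely the open Collatz conjecture,
    so this loop is not known to terminate in general.
    """
    steps = 0
    while n != 1:
        n = 3 * n + 1 if n % 2 else n // 2
        steps += 1
    return steps
```

There is no known shortcut: to find the step count one apparently has to run the iteration, which is the sense in which the process is computationally irreducible.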

Wouldn't this be a clear violation of the Principle of Computational Equivalence, which says that "almost all processes that are not obviously simple can be viewed as computations of equivalent sophistication", i.e., universal? Given that such a sequence is probably not the only one of its kind that halts yet is computationally irreducible, wouldn't this observation render the Principle of Computational Equivalence false?

A similar sequence can be found here, which is not obviously simple, yet cannot be universal: http://www.research.att.com/~njas/sequences/A000375

This sequence can be described as follows: Shuffle n cards labeled 1,...,n. If the top card is k, reverse the order of the top k cards. Keep doing this until the top card is 1. a(n) is the maximum possible number of steps before the top card is 1.

It is easy to prove by induction that a(n) is always finite. Yet the only known way to compute a(n) is to carry out the procedure on every possible permutation of the cards and let a(n) be the maximum number of steps it took for the top card to become 1. Such a procedure is not obviously simple, as it is computationally irreducible. Yet it always halts and is therefore not universal.
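The brute-force procedure described above can be sketched in Python as follows (function names are illustrative):

```python
from itertools import permutations

def topswops_steps(deck):
    """Count reversals of the top k cards until the top card is 1."""
    deck = list(deck)
    steps = 0
    while deck[0] != 1:
        k = deck[0]
        deck[:k] = deck[:k][::-1]  # reverse the order of the top k cards
        steps += 1
    return steps

def a(n):
    """A000375: maximum step count over all permutations of 1..n."""
    return max(topswops_steps(p) for p in permutations(range(1, n + 1)))
```

For example, a(3) = 2, realized by the deck [2,3,1]: it goes to [3,2,1] and then to [1,2,3]. The search is exhaustive over n! permutations, which is exactly the irreducible-looking computation being discussed.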

How would those believers in the Principle of Computational Equivalence respond?

__________________
Craig


11-07-2007 03:23 PM
Jason Cawley
Wolfram Science Group
Phoenix, AZ USA

Registered: Aug 2003
Posts: 712

Sure, there is a reason it says "almost all", and there are two different possible reasons or cases behind it.

There are clearly reducible cases that still manage to look complicated. Clearly reducible because one can find an explicit formula for the output without carrying out all intervening steps - e.g. the digits of 3^n in base 2, or rule 90 patterns from involved initial conditions. I call those cases "cracks" - as in cracking a code or a safe. When we find apparently complex behavior in a simple computational system, one of the first things we do is look for "cracks" - ways of reducing the computation to something much simpler to solve.
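As a concrete sketch of such a crack (my illustration, in Python): rule 90 grown from a single black cell produces Pascal's triangle mod 2, so by Lucas' theorem any cell's value follows from a single bitwise test, with no need to run the intervening steps:

```python
def rule90_cell(t, k):
    """Value of the rule 90 cell at step t, offset k from the single
    initial black cell, computed directly rather than by simulation.

    The pattern is Pascal's triangle mod 2: the cell is black iff
    C(t, (t+k)/2) is odd, which by Lucas' theorem holds iff the bits
    of (t+k)/2 form a subset of the bits of t.
    """
    if abs(k) > t or (t + k) % 2 != 0:
        return 0
    j = (t + k) // 2
    return 1 if (t & j) == j else 0
```

A formula like this is what "cracking" means: the t-step evolution collapses to a constant-time computation.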

So one open issue for PCE is "how many cracks are there?" One might imagine that uncrackable cases are rare, and that most things that look complicated are actually crackable, and therefore our initial impressions of complexity are generally unreliable. Reducibility would be the rule, and computational irreducibility would be rare and exceptional. PCE is claiming that this is not the case, and that it is the cracks that are the simple exceptions in a sea of computational irreducibility.

Why does Wolfram think this, given the existence of the potential counterexamples? It is an intuition coming from empirical induction. He looked for the onset of complexity in all sorts of simple systems. That means reducing the internal resources of the system as much as possible, while preserving complex looking outcomes. At that borderline, one can find some "cracks". But it gets remarkably hard to find more of them.

We throw lots of standard methods of analysis at them, and many minds, and most resist ready "cracking". A few fall. Beyond the level of system resources that was barely sufficient for complex behavior, one sees a few "crackables", a few others where one can see how one might start but can't get too far, and a slew of harder cases where one can't even see how to begin - and where standard measures of how involved the behavior is (e.g. minimal boolean expressions that exploit possible substitutions and cancellations) grow rapidly with system steps, initial condition width, or other system parameters.

What I call the "crackables" problem remains - how far do the computationally reducible cases extend into the apparently irreducibles - or, loosely, into the class 3s? PCE predicts the answer is "not far", but it is a prediction, not a theorem.

OK, but you were interested in the case of irreducibles that are not universal. And that is the next issue PCE raises, and you are right to notice it and to focus on it. PCE basically predicts that "irreducible" and "effectively universal" will usually coincide. I say "effectively" because strict universality generally requires "one infinity", or idealizes to unlimited running time or available memory (aka system size), or both. All of our practical computers have run for finite time and have finite memory. They are over a flexibility threshold but not over a cardinality chasm. In practice, being provably universal in those limits coincides with being programmable in the finite.

That cardinality issue aside, does irreducibility generally coincide with universality? PCE claims it does - that most irreducibles we encounter are also effectively programmable. This is easily the strongest claim PCE makes. I like to call it the class 3 problem, using class 3 as a proxy to mean "irreducible, not obviously universal", as distinct from class 4 where universality has been proven many times and it is easy to imagine it being extended to any other class 4 system. (If rule 110 has enough, other 4s probably do too - they need only be able to emulate rule 110, or the latest small TM for that matter).

So there are two broad possibilities - either irreducibility usually goes hand in hand with universality, whatever the limits of our ability to prove it in this or that case - or irreducible but strictly less than universal, is a large and well populated space, "between" the crackables on one side, and the actually universal on the other side. Are class 3 systems typically programmable? Or are they complicated enough that we can't tell what they are going to do next without explicitly simulating them, but still insufficiently rich (or perhaps, controllable?) to be universal?

One can certainly find instances of irreducible complexity that fall short of universality. In the case of 3n+1, we do not know that it always halts, so we do not know that it can't be universal, but that is the likely outcome. There are other cases where it is known - e.g. Tommaso Bolognesi gave an example at NKS 2006 of class 3 looking, irreducible behavior in a "provably weaker than universal" process algebra system.

PCE predicts that the usual cause of apparent complexity is (potential) universality. This does not have to be so; it is a claim not a theorem. What is the intuition behind this claim?

Again, it is an empirical induction from looking through many system types. Behavior rich enough to support universality can usually be found and found quickly, by increasing the system's available internal parameters somewhat. Beyond the level necessary for the first instance of universality, Wolfram thinks it is common. The process of proving systems universal works as a ratchet - each simple system proven universal provides another "easier" target for the next attempt to emulate. It is clearly easier to emulate a 2 color CA than every possible TM, or a TM with a tiny number of states than every possible TM.

One develops a sense of the size of the possibility spaces of systems by looking at a whole variety of them and enumerating them to examine how they behave. The less simple cases are larger, as well as harder to analyse, and generally complex behavior gets more and more common. You are already across a universality threshold at very low internal complexities. The vast numbers of systems of the same type but with more internal resources look like a sea of complexity beyond some narrow band of systems simple enough to analyze.

Clearly, universal is enough to imply irreducible, since from some inputs a universal system can emulate any algorithm. That they are almost all universal beyond a point is sufficient to explain the complexity seen. It may not be necessary, and that is the class 3 problem.

Wolfram's induction could be wrong. The space of "unprogrammable irreducibility" might be large, rather than a thinly populated borderland between systems too simple to do anything complicated and systems rich enough to support arbitrary computations. Wolfram is sticking his neck out claiming that irreducible and universal will mostly coincide. He is predicting that lots of universality proof attempts could succeed where others might expect them to fail. If he is right, on the other hand, it would tell us why most complicated systems do not yield to conventional analysis - because they are as complicated as anything in our universe.

Many of the methodological results of NKS do not turn on the question. Once we have hit irreducibility, whether it is in principle universal or not, we have to simulate and do computer experiments. But other, technological ones very well might. If we can in principle program behaviors that look like a mess, that is a different world of possible technical control or invention, than if most messy things are destined to remain uncontrollable as well as unpredictable.

Either way, investigating the question will teach us plenty about complexity, and that is the value of a big conjecture or a scientific hypothesis. It lights up the terrain ahead and informs future work. You can have any doubts you like about the class 3 problem and do NKS. It is the most prominent open problem in the field. Whenever new systems that looked like they might be between reducible and universal are proven to actually fall into either set, that is evidence that PCE is correct. Whenever another system or system class is found that does not fall into either set, but instead is irreducible without being universal, it is evidence against PCE.

I hope this is interesting.


11-07-2007 06:06 PM
Craig Feinstein

Registered: Nov 2004
Posts: 36

OK, I think that answers my question. I have a comment:

I think that a better way to phrase the Principle of Computational Equivalence is: "almost all processes that are not obviously simple are computationally irreducible."

This, I think, is the fact which makes the book NKS unique. Not only this, but the book NKS provides a lot more evidence for this statement than the stronger statement that "almost all processes that are not obviously simple are universal". Furthermore, who cares if a process is universal or not? No one is going to build a universal computer out of processes like Rule 110, as these types of processes are not efficient. Universality is historically the method of choice for proving that something is intractable, but that doesn't mean it is the only method for proving intractability.

And the statement that "almost all processes that are not obviously simple are computationally irreducible" is definitely saying something contrary to the way that most modern mathematicians see things. Most modern mathematicians believe they can find "cracks", as you call them, in almost anything they are given.

Based on my conversations with professors and students, it seems that their minds work on the a priori assumption that "almost all processes that are not obviously simple are computationally reducible", i.e., can be cracked, if one is clever enough. It is very difficult to convince them that this is not true. I have claimed that there are some famous open conjectures, e.g. the Collatz conjecture and the Riemann Hypothesis, that are unprovable because they predict the behavior of infinitely many computationally irreducible processes. If they are unprovable, then this means that the conventional deductive methodology of mathematics research, finding formal proofs for conjectures, is not as powerful as modern mathematics students are indoctrinated to believe, and perhaps this is something that modern mathematicians are not prepared to face.

Also, anyone who learns Chaitin's Incompleteness theorem, which describes Chaitin's number Omega, has no problem with the conclusion that "almost all processes that are not obviously simple are computationally irreducible", as this is implied by the fact that only finitely many bits of Omega are computable and Omega is a random number. So it seems to me that most of the empirical observations discussed in Wolfram's NKS are a natural consequence of Chaitin's Incompleteness theorem.

__________________
Craig

Last edited by Craig Feinstein on 11-08-2007 at 05:36 PM


11-08-2007 05:16 PM
