Wolfram Science Group
Phoenix, AZ USA
Registered: Aug 2003
Of course systems that are computationally equivalent in PCE terms can have other differences. Equivalence as a concept always refers to particular aspects. When we say that one Pentium chip is like another, we do not mean they occupy the same space coordinates. Computational equivalence of two systems does not mean they have the same economic value. It means the space of possible computations each can be programmed to eventually reach is the same.
Not all systems are computationally equivalent in this sense. Rule 250 can't be programmed to do anything one wants. There isn't enough going on. You can fit a section of an arbitrary signal with a Fourier expansion, but not with one sine wave, or a single constant. It is meaningful to say universality is a threshold because there are all sorts of things it is possible to do, and others it is impossible to do, below it. That changes when one crosses that threshold.
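To see just how little is going on in rule 250, here is a minimal Python sketch (my own illustration, not from the post): evolving it from a single black cell produces nothing but an expanding checkerboard, step after step.

```python
def step(cells, rule):
    """One step of an elementary CA in Wolfram's rule-number encoding.
    Cells beyond the edges of the list are treated as white (0)."""
    padded = [0, 0] + cells + [0, 0]
    return [(rule >> (padded[k] * 4 + padded[k + 1] * 2 + padded[k + 2])) & 1
            for k in range(len(cells) + 2)]

row = [1]                                  # single black cell
for t in range(6):
    print("".join(".#"[c] for c in row).center(16))
    row = step(row, 250)
# Every row is just [1, 0, 1, 0, ..., 1] - an expanding checkerboard.
```

The whole possibility space of the rule, from this initial condition, is one trivially repeating pattern; there is nothing there to program with.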
Perhaps the difficulty is intuition from engineering computer science rather than from theoretical science and modeling. Suppose I set you the task: predict the behavior of a universal system following an unknown rule. Is your task made trivial by the fact that the number of elements in the system is "only" 64,000? (Anybody who thinks so, write a master-level Go program and collect the prizes. Large possibility spaces are inherently intractable to span.)
But the number of elements in the system can be 6.022x10^23, and the result predictable enough for all practical purposes - if the behavior those elements follow is simple enough in form. The practical question is, what internal computational variety and interconnection does the system contain? If there is more than a smidgen - across the universality threshold - then any significant size will make the system computationally intractable. (Not unattackable, but requiring a definite approach - one in which moderately higher speed won't help much at all). If there is practically none, so the resulting behavior is simple, then size won't matter, reduction will be possible, and even a slide rule will give decent results. It is not the size of the system or the processing power thrown at it that matters for modeling. It is whether the system behaves in an inherently intricate way, or not.
For the software engineering types, you have to see the "inverse problem". Imagine I have an old computer with 64k memory. It will start in some unknown state. It runs for 1 minute, ending in some other state. Your mission is to reproduce that exact configuration history at any time in any portion of your memory, no matter what else you get wrong elsewhere or in the meantime. You don't get a Pentium and one minute, or five. We are sporting about it. You get a teraflop supercomputer and, for running time, the lifetime of the universe. But it is a universal computer, with an unknown rule, and you know absolutely nothing about its initial state. You therefore face the entire possibility space of a dinky old computer for a single CPU minute. And you will lose. You can barely begin to enumerate possible states because they explode exponentially. Now, second task: my wimpy old computer has to calculate the center cell of the 64 trillionth step of a 1D CA from a simple initial condition, where the rule, as the smiling fates would have it, happens to be a simple one - say rule 250 (checkerboard). No problem at all. The step number is even; that is all I need to know.
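That rule 250 shortcut can be made completely explicit. A small Python sketch of my own: check the pattern against brute-force simulation for small step counts, then answer the 64-trillionth-step question with a parity test, no simulation at all.

```python
def step(cells, rule):
    # One step of an elementary CA, Wolfram rule-number encoding;
    # cells beyond the edges are treated as white (0).
    padded = [0, 0] + cells + [0, 0]
    return [(rule >> (padded[k] * 4 + padded[k + 1] * 2 + padded[k + 2])) & 1
            for k in range(len(cells) + 2)]

def center_by_simulation(t):
    row = [1]                      # single black cell
    for _ in range(t):
        row = step(row, 250)
    return row[len(row) // 2]      # growth is symmetric, center stays centered

def center_by_shortcut(t):
    # Rule 250's center column simply alternates: 1, 0, 1, 0, ...
    return 1 if t % 2 == 0 else 0

# The shortcut agrees with brute force on every small case...
assert all(center_by_simulation(t) == center_by_shortcut(t) for t in range(50))

# ...and answers the 64-trillionth-step question instantly.
print(center_by_shortcut(64 * 10**12))  # -> 1, since 64 trillion is even
```

The reducible system collapses to a one-line formula; the step count, however astronomical, is irrelevant to the cost of the prediction.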
Computational equivalence is not relative to a task, it is a statement about a space of tasks. It is saying the task is what matters, and saying it in a particular way. (It is also about how much "software" can do, without requiring special components).
Any system following definite rules can be modeled or reconstructed with enough detailed information - about initials as well as the form of the rule, or the behavior of sub-elements, "atomic" interactions. Simple enough behaviors have the additional property that knowledge of the initials is not really required - or not in detail, or only as a parameter to which the behavior responds in a regular and reducible way.
But a system capable of universal computation is not going to have this property. Make a prediction about its behavior, without knowing the details of its initial conditions. Now I set those details, programming the system to behave contrary to your prediction. Think I can't play that game if I only have an old desktop computer? Sure I can. To limit possible behaviors, you need not only the rule but at a bare minimum constraints on the initials (if not exhaustive knowledge of them).
Is any of this only about CAs? It is not. Are the classes of behavior seen in CAs limited to that special type of system? Not at all.
The principle is about the method of modeling natural systems with simple programs, and accounting for the variety and complexity we see in natural systems with those models. It is about seeing natural systems as software, not just as special components that supposedly must do x or y.
What do you do with a natural system you want to model that you see is universal? You model it with a universal system, not a simple one. The model can behave in all sorts of complicated ways. Good - so can the real system. But the real system does some definite thing. Fine, get very empirical with it (to constrain initials, to learn the form of the rule, etc), and also try lots of guesses as to its rule and initials. Expect details to be off, and only the class of behavior seen to be right, at least at first.
Then we have the objection that "both computations are ones and zeros being manipulated by a computer using electricity through electrical circuitry and software". This is supposed to prove what? It shows that simple behaviors and complicated behaviors are both behaviors. What isn't? Where the first objector saw the distinction being made but pretended it couldn't be meaningful because he could imagine another distinction, this one pretends not to see the distinction simply by not making it.
If a system is behaving according to rule 250 I can predict the millionth step in one move. If it is behaving according to rule 30 I can't. No quibble can erase that distinction.
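To make that contrast concrete (an illustrative Python sketch of my own, not from the post): no closed form is known for rule 30's center column, so getting step t means grinding through all t steps, while rule 250's answer is a parity check.

```python
def step(cells, rule):
    # One elementary-CA step, Wolfram rule-number encoding; edges are white (0).
    padded = [0, 0] + cells + [0, 0]
    return [(rule >> (padded[k] * 4 + padded[k + 1] * 2 + padded[k + 2])) & 1
            for k in range(len(cells) + 2)]

def center_column(rule, steps):
    """Center-column values for steps 0..steps, from a single black cell."""
    row = [1]
    out = [row[0]]
    for _ in range(steps):
        row = step(row, rule)
        out.append(row[len(row) // 2])
    return out

print(center_column(250, 16))  # 1, 0, 1, 0, ... - reducible, it is just t mod 2
print(center_column(30, 16))   # disordered; only known route is step-by-step
```

For rule 250 the loop is pure ceremony; for rule 30 the loop is, as far as anyone knows, the only way to get the answer, and its cost grows with the step number you ask about.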
The difference between them is not that one is a computation and the other isn't; it isn't that one is represented by 1s and 0s and the other by red, green, and blue; it isn't that they are manipulated by computer rather than by rocks laid out on the ground; it isn't that electricity is used or mechanics; it isn't how fast a Commodore 64 determines that one million is even; it isn't that one is an internal mental process and the other an external physical one; it isn't that one is discrete and the other continuous; or one deterministic and the other an objectively possible quantum process. None of the pet important distinctions posited by others from early philosophy to recent science to previous posts in this thread have anything to do with why and how one of them is completely different from the other, with respect to knowledge.
One of them is simple and reducible. And the other isn't. Without differing in any of the "usual suspect" ways. Complexity is real, a formal phenomenon, exists on the level of software not components, and has general laws. Deliberately missing the critical distinction or pretending subsequent ones have anything like the same importance for whether we can know something (and if so, how, what methods to try) does not change this.
PCE delimits the class in the second category, the complex rather than simple, and states a reason why all complex systems are functionally and practically alike in their intractability - to our perception and to the most elaborate analysis. It is a diagnosis of what underlies and unifies complex systems - as such, and as distinguished from all that is simple on the other hand - and makes them hard to decipher: computational universality. It is the thesis that the cause of formal intractability and apparent, subjective, phenomenal complexity is real, underlying, hard, computational universality. Things seem complicated to us when and only when they could be programmed to do anything.
That is the thesis. That it might be wrong is, in my mind, something said in its favor - in the sense that it is sticking its neck out and making a real claim about the world. But if it is wrong it won't be for these missed-distinction sorts of reasons. Maybe class 3s aren't universal, and universality is a proper subset of real complexity - perhaps even a relatively special, rare subset. Maybe about irreducibility Wolfram is right but about universality he isn't. Maybe we won't be able to tell, because it will be too hard to establish or disprove universality for some classes of complex-looking systems. But it is a perfectly reasonable hypothesis with a lot to be said for it. And we will learn a lot about complexity proving it wrong, or finding it hard to prove or disprove, or finding it right - whichever.