Wolfram Science Group
Phoenix, AZ USA
Registered: Aug 2003
I've moved this thread here because it strikes me as much more of an NKS way of thinking question than a pure investigation of the properties of simple programs question.
I think there is some confusion, or at least potential for it on the part of some readers, between the idea of a computer and the idea of a CPU. A CPU is part of a particular design for practical computers that we call the von Neumann architecture. (It is also, incidentally, not necessarily a physical object made out of carefully arranged sand, but anything that performs a certain functional role in such a setup.) It is not a necessary part of the abstract nature of computation. It is an allowed and often convenient extra, a functional subsystem distinguished from other aspects of a computation.
Abstractly considered, a computation is a transformation operating on information. Information goes in, information comes out, differently. The difference can depend on the information or on any portion of it. But one set of rules must decide what happens to any variety of information given - that is the underlying regularity that makes it a computation.
(Note, information is not yet computation. Any number can be recorded on an abacus, but an abacus by itself is only a memory, not a computer. A memory is just an arrangement, what is arranged is a matter of indifference, electrons, beads, grains of sand... A set of rules for sliding beads back and forth on an abacus, on the other hand, may constitute a computer. Computers do not need to be made from electronics, that is just very convenient for handling large arrangements of delicate parts in a robust way, with minimal effort.)
In the von Neumann architecture, a portion of the information in the input is considered "instructions" and another portion is considered "data". But they are instantiated in precisely the same way in memory. A definite instruction set operates on the instruction portion of the input to determine the sequence of operations to be performed on the memory portion of the input. (When this division is relaxed, and the results of operations on the memory portion can change the instruction portion of the input during run time, we speak of self-modifying code.)
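The point that instructions and data live in one undifferentiated memory can be made concrete with a toy sketch. The machine below is invented for illustration (its two opcodes are not any real instruction set): a single flat list holds both the "program" cells and the "data" cells, and only the fetch-execute loop treats them differently.

```python
# Toy von Neumann-style machine (invented opcodes, for illustration only).
# One flat memory holds instructions AND data; nothing in the memory itself
# marks which is which - only the program counter's path distinguishes them.

def run(memory):
    """Fetch-decode-execute loop over a single flat memory list."""
    pc = 0                          # program counter indexes the same memory as the data
    while memory[pc] != 0:          # opcode 0 = HALT
        op, a, b = memory[pc:pc + 3]
        if op == 1:                 # opcode 1 = ADD: memory[a] += memory[b]
            memory[a] += memory[b]
        pc += 3
    return memory

# Cells 0-5 hold the "program"; cells 6-7 hold the "data".
mem = [1, 6, 7,   # ADD mem[6] += mem[7]
       0, 0, 0,   # HALT
       2, 3]      # data: 2 and 3
print(run(mem)[6])  # -> 5
```

Because the program is just numbers in memory, an instruction could in principle overwrite cells 0 through 5 as easily as cells 6 and 7, which is exactly the relaxation of the instruction/data division described above.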
A computer is general purpose if it can be made to implement any finite algorithm by changing what is fed into it. In the von Neumann architecture, by changing only the instruction portion of the input, we get the general purpose computer to perform any computable transformation on the rest of the data.
This exploits a finite version of the principle of universality. It is not necessary to have a different underlying rule for each sort of computation we want to perform. We can instead fix an instruction set (e.g. the instruction set of a contemporary CPU), and then arrange sequences composed out of that fixed instruction set so as to mimic the behavior of the overall algorithm we want, for this particular run.
(Notice I say "a finite version of...". The strict mathematical sense of universality depends on countably infinite sets, and no actual computer has infinite memory or has run for an infinite time. But as we see in practice, real-world computation depends on the underlying flexibility of the system and not on its cardinality. Universality in the strict mathematical sense describes what, e.g., a Pentium chip might be able to do with infinite memory or an infinite run time. When the answer is "any computable algorithm", we find in practice that we have a general purpose computer, even though the exact logical conditions of universality have not, sensu stricto, been met.)
All that is necessary is for the limited instruction set to have sufficient internal richness to support emulation of an arbitrary computation. That is, the instruction set needs to be past the threshold of universality. Once it is, we do not in principle need to make it any more complicated, nor do we need to alter it to perform a different computation. We can instead leave the instruction set fixed, and modify the data passed to it, to perform a different sequence of those instructions (longer, looped, whatever).
When computation was being discovered, systems were deliberately designed to have universality as a property. But Turing is deservedly famous and so are his machines, because he proved that a system with a very simple underlying structure was already sufficient for this. Previous results in mathematics had already shown that any finite algorithm could be encoded as a problem of arithmetic. And Turing showed that precisely those problems that could be solved there, could be solved by one of his simple machines (idealizations, in fact, of the process whereby human beings did arithmetic).
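The simplicity of Turing's machines is easy to see in code. Below is a minimal simulator, a sketch rather than Turing's exact formalism: a machine is just a table mapping (state, symbol) pairs to (symbol to write, head move, next state). The example machine, a binary incrementer, is my own small illustration, not one of Turing's.

```python
# Minimal Turing machine simulator (a sketch, not Turing's exact formalism).
# Rules map (state, symbol) -> (symbol_to_write, head_move, next_state).

def run_tm(tape, rules, state="start", head=0):
    tape = dict(enumerate(tape))            # sparse tape; "_" stands for blank
    while state != "halt":
        sym = tape.get(head, "_")
        write, move, state = rules[(state, sym)]
        tape[head] = write
        head += {"L": -1, "R": 1, "N": 0}[move]
    return [tape[i] for i in sorted(tape)]

# Example machine: binary increment, head starting on the rightmost bit.
inc = {
    ("start", "1"): ("0", "L", "start"),    # 1 plus carry -> 0, carry moves left
    ("start", "0"): ("1", "N", "halt"),     # absorb the carry
    ("start", "_"): ("1", "N", "halt"),     # grow the number by one digit
}
print(run_tm(["1", "0", "1", "1"], inc, head=3))  # 1011 + 1 = 1100
```

The entire "hardware" is a few lines; everything interesting lives in the rule table, which is just more data.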
In Turing’s day, “computer” did not mean “box on a desk that performs general algorithms”. It was the job description of a human being who added, subtracted, multiplied, and divided numbers. We can encode any arrangement as a number, both an input and an output. Any computation can then be specified as a certain transformation between numbers.
Now, after NKS, we know that systems that nobody ever designed to be able to perform universal computations can in fact perform universal computations, and that this is true even of remarkably simple systems. So simple they might readily arise physically in everyday systems, all around us. That, Wolfram conjectures, is the underlying cause of apparent complexity, the unifying aspect or marker that we notice as complexity. Some systems have sufficient internal richness that they are as programmable (capable of behaving in different ways when fed different inputs) as any artificial system that was meant to be.
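The elementary cellular automata studied in NKS are the standard example of such undesigned richness: rule 110, with a neighborhood of just three cells, is known to be universal (Cook's proof, presented in NKS). A few lines suffice to run one; the rendering choices below (fixed zero boundaries, single starting cell) are mine.

```python
# Elementary cellular automaton: one fixed, very simple rule applied in
# parallel to every cell. Rule 110 is known to be universal.

def ca_step(cells, rule=110):
    """One step of an elementary CA on a row of 0/1 cells (zero boundaries).
    Each cell's new value is the bit of `rule` indexed by its 3-cell
    neighborhood read as a binary number."""
    padded = [0] + cells + [0]
    return [(rule >> (padded[i - 1] * 4 + padded[i] * 2 + padded[i + 1])) & 1
            for i in range(1, len(padded) - 1)]

row = [0] * 15 + [1]            # single black cell at the right edge
for _ in range(8):
    print("".join(".#"[c] for c in row))
    row = ca_step(row)
```

Note what is absent: no program counter, no instruction/data split, no design intent. The rule was found by enumeration, not engineering, and yet it is as programmable, via its initial condition, as a machine that was built to be.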
The claim about complexity and the universe is then that its ongoing evolution in time corresponds to a computation of arbitrary sophistication. From prior state to next state there is some simple transformation. But a repeated sequence of simple transformations can do all sorts of complicated things.
Any given configuration of the universe then corresponds to a state of data or “memory”, if one wants to pursue the analogy, with no internal division between an instruction portion and a data portion, and no use made of the von Neumann scheme of functional differences imposed on memory for the conceptual convenience of a human programmer. What corresponds to the underlying set of possible transformations (which can then be concatenated into any finite algorithm)? The laws of physics, or rather, any transformation the laws of physics entail.
When Wolfram speaks of looking for the rule that might define the universe, he means finding a functional form that specifies those entailed transformations, in some appropriate, underlying representation of real states. Memory in is the state of the universe at time t0; memory out is the state of the universe at time t1. The rule of the universe, a.k.a. the laws of physics, is the transformation r: M(t0) -> M(t1). The trick, of course, is to find a simple r that can still give rise to the great complexity we see all around us.
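The schema r: M(t0) -> M(t1), iterated, is worth spelling out. In the sketch below, r is a stand-in rule chosen purely for illustration (each cell copies the majority of its three-cell neighborhood), not a proposed law of physics; the point is only the shape of the process: one fixed transformation, applied again and again to the whole state.

```python
# The schema r: M(t0) -> M(t1), iterated. Here r is a placeholder rule
# (majority vote over a 3-cell cyclic neighborhood), NOT a law of physics.

def r(state):
    """One hypothetical update: each cell becomes the majority value of its
    3-cell neighborhood (cyclic boundaries)."""
    n = len(state)
    return [1 if state[i - 1] + state[i] + state[(i + 1) % n] >= 2 else 0
            for i in range(n)]

def evolve(state, steps):
    """Iterate the transformation: M(t+1) = r(M(t))."""
    for _ in range(steps):
        state = r(state)
    return state

print(evolve([0, 1, 0, 1, 1, 0, 0, 1], 3))
```

This particular r quickly settles down; the conjecture discussed above is precisely that some comparably simple r does not, and that ours is one of them.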
I hope this helps.