Wolfram Science Group
Phoenix, AZ USA
Registered: Aug 2003
"a system is programmable if you can encode it -or program it- with some input or initial condition in such a way that you get an output that can be decoded in the same way."
Well, the encoding scheme and the decoding scheme need not be the same as each other. But yes, they can each be fixed once for all the initial conditions, not rejiggered for each potential calculation. And then the space of initials of the underlying system implements precisely the computable functions - in the infinite-memory limit, anyway. (Some calculations, in some encodings and rules, may require very large initials, etc.) One can also require that the encoding and decoding schemes always halt, to prevent them from doing all the real work for you.
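To make the fixed encode/run/decode division concrete, here is a toy sketch (my own example, not from the book): a Turing machine that adds in unary, wrapped in an encoder and decoder that are fixed for all inputs and obviously always halt - each is a trivial translation doing none of the real work.

```python
# A fixed encoder/decoder pair around a unary-addition Turing machine.
# The encoder and decoder never change per input, and each always halts.

def encode(m, n):
    # Fixed encoding: (m, n) -> unary tape 1^m 0 1^n
    return [1] * m + [0] + [1] * n

def run(tape):
    # The "system": a 2-state unary adder.
    # State A: scan right to the separator 0, overwrite it with 1.
    # State B: scan right past the 1s, then erase the last one.
    tape = tape + [0]          # blank cell at the right end
    pos, state = 0, "A"
    while state != "halt":
        if state == "A":
            if tape[pos] == 1:
                pos += 1
            else:              # found the separator
                tape[pos] = 1
                state = "B"
        elif state == "B":
            if tape[pos] == 1:
                pos += 1
            else:              # ran off the 1s; erase the last one
                tape[pos - 1] = 0
                state = "halt"
    return tape

def decode(tape):
    # Fixed decoding: count the 1s.
    return sum(tape)

print(decode(run(encode(3, 4))))   # -> 7
```

The same `encode`/`decode` pair works for every (m, n); only the initial condition fed to the system varies.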
"given that a system is universal, is there always a way we can encode an input so that it can perform a certain computation?"
Yes, in principle. Where it sits in the enumeration of initial conditions for the system and encoding you've picked is another matter - and therefore so is how fast or efficient the computation will be in that system as opposed to some other.
A mapping that takes you from a function in a baseline universal system, typically one we understand well intuitively (e.g. arithmetic), to a translated construction (encoder, initial, decoder) in another system, is in effect a compiler. If the emulated system is close, the compiler may be quite simple. Even if it isn't obviously close, but is abstract and desiccated enough, a quite simple compiler will often suffice.
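The "abstract and desiccated" case can be made concrete with S-K combinators. This is only a minimal reducer, not anyone's actual compiler, and the representation (an application (f x) as the tuple `(f, x)`) is my own choice - but it shows how little machinery such a system needs as a target.

```python
# A minimal S-K combinator reducer.  Atoms are strings; an application
# (f x) is the tuple (f, x).
# Rules:  K a b -> a        S a b c -> (a c) (b c)

def reduce_step(term):
    """Try one leftmost-outermost reduction; return (new_term, changed)."""
    if not isinstance(term, tuple):
        return term, False
    f, x = term
    # K a b -> a        term looks like (('K', a), b)
    if isinstance(f, tuple) and f[0] == 'K':
        return f[1], True
    # S a b c -> (a c) (b c)   term looks like ((('S', a), b), c)
    if isinstance(f, tuple) and isinstance(f[0], tuple) and f[0][0] == 'S':
        a, b, c = f[0][1], f[1], x
        return ((a, c), (b, c)), True
    nf, changed = reduce_step(f)
    if changed:
        return (nf, x), True
    nx, changed = reduce_step(x)
    return (f, nx), changed

def normalize(term, limit=1000):
    """Reduce to normal form, up to a step limit."""
    for _ in range(limit):
        term, changed = reduce_step(term)
        if not changed:
            return term
    raise RuntimeError("no normal form within step limit")

# S K K behaves as the identity combinator:
identity = (('S', 'K'), 'K')
print(normalize((identity, 'a')))   # -> 'a'
```

Everything beyond these two rewrite rules - numbers, arithmetic, anything - lives in the encoding, which is exactly the compiler's job.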
Matthew Szudzik has given several examples in summer program lectures, e.g. a compiler for S-K combinators that shows how they can implement any function in arithmetic. Tommaso Bolognesi at NKS SS 2005 showed a compiler for process algebras (an operator language akin to expression trees). It is often easier to show a direct emulation scheme for a closely related system, e.g. the construction on page 658, which shows how a CA with plenty of allowed colors emulates any TM, using sets of colors to represent head positions and states, with a modded color value giving the state of the tape under the head. Page 786 shows a single integer equation in arithmetic (a diophantine equation) that encodes the behavior of rule 110; any question about the behavior of rule 110 can be translated into a question about solutions to that equation. All of these are general translation schemes.
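A sketch in the spirit of the page-658 construction (the color scheme here is my own, not the book's exact one): a range-1 CA whose extra colors carry the Turing machine's head. With k tape symbols, colors 0..k-1 are plain cells, and color k + s*k + a means "head here, in state s, reading symbol a". Running the CA and a direct TM simulation side by side shows they stay in lockstep.

```python
def head_info(color, k):
    """Decode a head-carrying color, or return None for a plain cell."""
    if color >= k:
        return divmod(color - k, k)   # (state, symbol)
    return None

def ca_step(cells, rules, k):
    """One CA step: each new cell depends only on its radius-1 neighborhood."""
    new = list(cells)
    for i, c in enumerate(cells):
        h = head_info(c, k)
        if h is not None:
            s, a = h
            ns, na, mv = rules[(s, a)]
            new[i] = na                       # head departs; leave new symbol
        # does a neighbor's head move onto this cell?
        for j, toward_me in ((i - 1, +1), (i + 1, -1)):
            if 0 <= j < len(cells):
                hj = head_info(cells[j], k)
                if hj is not None:
                    s, a = hj
                    ns, na, mv = rules[(s, a)]
                    if mv == toward_me:
                        new[i] = k + ns * k + c
    return new

def tm_step(tape, pos, state, rules):
    """Direct Turing machine step, for comparison."""
    ns, na, mv = rules[(state, tape[pos])]
    tape = list(tape)
    tape[pos] = na
    return tape, pos + mv, ns

# A 2-state, 2-symbol machine chosen so the head stays in bounds
# (it bounces between two cells); any rule table works the same way.
rules = {(0, 0): (1, 1, +1), (1, 0): (0, 1, -1),
         (0, 1): (1, 0, +1), (1, 1): (0, 0, -1)}
k = 2
tape, pos, state = [0] * 7, 3, 0
cells = list(tape)
cells[pos] = k + state * k + tape[pos]

for _ in range(6):
    cells = ca_step(cells, rules, k)
    tape, pos, state = tm_step(tape, pos, state, rules)

# The CA configuration still encodes exactly the TM configuration:
expect = list(tape)
expect[pos] = k + state * k + tape[pos]
print(cells == expect)   # -> True
```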
"the initial condition in rule 110 could, in principle, be encoded in different ways so that it could perform different kinds of computations."
That you can encode any computation in rule 110 is what the proof of 110's universality establishes. It gives a way of encoding states of cyclic tag systems in configurations of rule 110, using the various backgrounds and gliders naturally occurring in 110 as information elements. So that is effectively a compiler for cyclic tag computations to be executed by a rule 110 processor. Cyclic tag systems can in turn emulate arbitrary TMs according to a different scheme. So, given a TM to emulate, one can run it through a cyclic tag emulation scheme and get a cyclic tag system and initials set up to emulate that TM. Then one can compile that cyclic tag setup into a rule 110 setup, run rule 110, read out its output according to the decoding scheme the construction provides, and get an answer about the cyclic tag system's behavior from those initials. One can then translate that answer according to the decoding scheme of the cyclic tag emulation of the TM, and recover a TM output for the initials specified.
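The cyclic-tag half of that chain is easy to state in code. A minimal sketch (my own implementation; the particular two-block example is just for illustration):

```python
from collections import deque

# A cyclic tag system: at each step, remove the first element of the
# tape; if it was 1, append the current block from a fixed cyclic list
# of blocks.  This is the system class rule 110 is proven to emulate.

def run_cyclic_tag(blocks, tape, steps):
    tape = deque(tape)
    history = []
    for t in range(steps):
        if not tape:          # empty tape: the system has died out
            break
        first = tape.popleft()
        if first == 1:
            tape.extend(blocks[t % len(blocks)])
        history.append(list(tape))
    return history

for row in run_cyclic_tag([[1, 1], [1, 0]], [1], 5):
    print(row)
```

All the difficulty in the universality proof is on the other side: realizing these pop-and-append steps as collisions of gliders against a periodic background in 110.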
Naturally, you'd never want to (lol). It would be hopelessly inefficient for practical purposes. But the space of computations reached is formally the same, since the possible computations can thus be put into one-to-one correspondence. That is the theoretical interest in the result. Also, there are plenty of theorems about the possible behaviors of universal systems which immediately "go through" as soon as one has established such a correspondence.
In a more practical context, you can think about writing functional computer programs in different higher-level languages. You can write a program that behaves the same way - in an input-to-output-pair sense - in lots of different ways, and in any higher-level language. And you can write compilers for higher-level languages that enable them to run on different machines, with different base instruction sets, etc. It is a similar point about behavior-preserving formal flexibility, which is a threshold: once you are over it, the space of possible computations that can be reached (in the infinite memory, or infinite initial condition size, limit, to be sure) is the same.
And the NKS point is that we can find instances of systems for which all of that is true, that are so simple we can readily imagine even natural systems, not designed for it, being past that flexibility threshold. It has been proven of some simple class 4s, and the relative ease of constructing new chains of emulation among 4s with more internal resources (more colors or longer range, e.g.) leads Wolfram to expect that the 4s in general are all past this flexibility threshold. That has not been proven, but it is widely believed. More controversial, and harder to see how to establish in any given case, is Wolfram's stronger conjecture that even class 3s are past this threshold.
"does universality imply a halting condition?"
In general, no. Decoding schemes have to be allowed to treat some states as answers, even if they contain, say, patterns that persist but shift to the left forever. A decoding scheme itself can't run forever (i.e. it can't take forever to decide what the system it is supposedly decoding is "saying" at step t), but it can return the answer "hasn't halted" about the system it is decoding. As for the behavior of the universal system itself, it cannot give a definite answer in finite time, under a fixed decoding scheme, for all inputs, or it would not be able to emulate e.g. a TM that fails to halt, and so would not be universal. All universal systems can (for some inputs, or "potentially") get into loops that might never end, that might depend on the outcome of some elaborate and incompressible sub-calculation, etc.
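That constraint on decoders is easy to make concrete. In this toy sketch (the machine and the "H" halt marker are invented for illustration), the decoder always returns promptly: it either reads an answer off a halted snapshot or reports "hasn't halted".

```python
# Hypothetical setup: a config is (tape, pos, state), and state "H"
# marks a halted machine.  The decoder itself always halts.

def decode(config):
    tape, pos, state = config
    if state == "H":
        return ("answer", sum(tape))   # read the result off the tape in unary
    return ("hasn't halted", None)

def step(config, rules):
    tape, pos, state = config
    if state == "H":
        return config                  # halted configs are fixed points
    ns, na, mv = rules[(state, tape[pos])]
    tape = list(tape)
    tape[pos] = na
    return tape, pos + mv, ns

# A toy machine that writes three 1s and halts.
rules = {("A", 0): ("B", 1, +1),
         ("B", 0): ("C", 1, +1),
         ("C", 0): ("H", 1, 0)}

config = ([0, 0, 0, 0], 0, "A")
for t in range(5):
    print(t, decode(config))
    config = step(config, rules)
```

Polling `decode` at each step is always safe; what no fixed decoder can do is promise a definite answer in finite time for every possible input.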
Fine questions by the way.