Wolfram Science Group
Phoenix, AZ USA
Registered: Aug 2003
One quick reply, one slow one.
There are plenty of rules or system types that are clearly simple, not only at the level of their rules or their setup, but also in their ultimate behavior. You won't ever find any complexity in the function that returns 0 for all inputs. Or in its CA analog, rule 0. In no reasonable sense is the successor function, which returns its input plus 1, complicated - even though it has an infinite number of possible outputs, and what it returns is different for each of an infinite number of inputs. So clearly, fundamentally simple rules exist. Going a bit farther, no 2-state, 2-color Turing machine is ever going to do anything interesting.
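To make "fundamentally simple" concrete, here is a minimal sketch of the two examples just mentioned - the constant-zero CA rule and the successor function. The function names are mine, chosen for illustration:

```python
def rule_0(left, center, right):
    """Elementary CA rule 0: every neighborhood maps to 0."""
    return 0

def successor(n):
    """Returns n + 1: infinitely many distinct outputs, yet trivially simple."""
    return n + 1

# One step of rule 0 wipes out any initial row (wrapping at the edges).
row = [0, 1, 1, 0, 1]
next_row = [rule_0(row[i - 1], row[i], row[(i + 1) % len(row)])
            for i in range(len(row))]
print(next_row)  # [0, 0, 0, 0, 0]
```

However many inputs you feed either one, no structure ever emerges.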
And universal rules are clearly not like that. It is not a matter of terms or usage, but a real formal difference.
When we say that simple programs produce complicated results, it does not mean that we secretly smuggled in the complicated part and that "really" they are complicated rules. I understand there is a traditional intuition from engineering that we only get complexity out if we deliberately put it in. But this intuition is simply wrong. Fundamentally, hopelessly, empirically wrong. Intuition is a wonderful thing and often a useful guide, but sometimes it is wrong. And intuition dies hard.
If you write a C program a million lines long, you aren't surprised if it does complicated things. It has enormous internal resources, ways to vary what it does in special case after special case, elaborate chains of things to do, branches to go down or avoid, all connected back to each other in a spaghetti-like network. If you look at the circuit design for a CPU, you will see a hardware version of similar complexity.
But nothing remotely like that kind of complex, specialized, designed setup is necessary to produce strictly arbitrarily complex behavior. No, it is not being smuggled in from the overall system setup, say the definition or layout of a CA - the setup is nearly irrelevant and can be varied as you please, and the same phenomenon will crop up again.
The relationship between intricacy put in at the system-definition stage and complexity coming out at the behavior stage is not linear. It is a step function, a threshold effect. Clearly the million-line C program and the CPU are over on the "complex as you please" side. But they have company there. And you get there a lot faster than people supposed in the past.
On the constant function or rule 0 side, you can find rules so simple they can never do anything complicated. Slightly more internal resources, and you can get a couple more varieties of behavior, but really only a couple more. Those are then seen over and over in simple system after simple system. Constants, shifts, periodic behaviors, nesting. In setup after setup, those and no more - so long as the system stays on the near side of the resources divide.
And then you pass a threshold, and you instead see great internal complexity. You don't have to have more colors than rule 0 has. You don't have to have a different setup. The specific difference between, say, rule 250 (which makes a growing checkerboard pattern) and rule 30 or 110 is not smuggled into different details of the setup. They do not have different status under the usual suspects - designed or not, huge specifications or short ones, deterministic or not, purely formal or empirical, etc. All those attempted "cuts" put them in the same bin. But one lot do fundamentally simple things, and the other do not.
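One way to see the contrast directly is to evolve both rules from the identical single-cell setup. The sketch below uses the standard elementary-CA rule numbering, where bit k of the rule number gives the new cell value for the neighborhood whose three cells read as the binary number k; the helper names are my own:

```python
def step(row, rule):
    """One CA step: each cell's new value is the rule bit indexed by
    the 3-cell neighborhood read as a binary number (wrapping edges)."""
    n = len(row)
    return [(rule >> (row[(i - 1) % n] * 4 + row[i] * 2 + row[(i + 1) % n])) & 1
            for i in range(n)]

def evolve(rule, width=31, steps=15):
    """Evolve from a single black cell; return each row as a text line."""
    row = [0] * width
    row[width // 2] = 1
    lines = []
    for _ in range(steps):
        lines.append(''.join('#' if c else '.' for c in row))
        row = step(row, rule)
    return lines

for rule in (250, 30):
    print(f"rule {rule}:")
    print('\n'.join(evolve(rule)))
```

Printed side by side, rule 250 gives the regular expanding checkerboard while rule 30 gives an irregular pattern - same code, same setup, only the rule number differs.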
Consider a case like the 3n+1 problem - where it may be that the eventual behavior is always "simple" in a specific sense, but where it at least so far seems arbitrarily hard to prove that is always the case. Given any number: if odd, multiply by 3 and add 1; if even, divide by 2. Repeat. It is easy to see that 1 goes to 4, then 2, then 1, and cycles. It is not known whether all numbers land on that same cycle or not. 3 does, by way of 5, which does, by way of 16 and 8. If an odd does, then so does the even that is its double. 7 does, by way of 11 and 17 and 13. Etc.
But it is some computational work to verify, for each number, that it eventually lands on the 4-2-1 "attractor". Is this complexity somehow smuggled into the setup? Does it exploit lots of cells, or parallelism? No - it is already there in fundamentally simple questions in number theory.
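The iteration itself takes only a few lines. This sketch assumes - as is unproven in general - that the number we feed it does reach 1:

```python
def collatz_steps(n):
    """Count the 3n+1 steps for n to reach 1 (assumes it eventually does)."""
    steps = 0
    while n != 1:
        n = 3 * n + 1 if n % 2 else n // 2
        steps += 1
    return steps

# 1 -> 4 -> 2 -> 1 cycles; 7 reaches 1 in 16 steps,
# passing through 11, 17, and 13 on the way down.
print(collatz_steps(7))  # → 16
```

Nothing in it is beyond arithmetic on a single number, yet verifying the attractor claim for all inputs is genuine computational work.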
We can notice two critical components of the 3n+1 example that may indeed be needed. It has conditionality - it does something different for evens and for odds. And it has the notion of doing something over and over again, that one word "repeat". That appears to be about all one can say is really required. And it is precious little.
But 3n+1 may be a borderline case, because somebody might prove a theorem that all numbers end on the same cycle. Would that make the 3n+1 example a fundamentally simple behavior? It isn't fundamentally simple to prove that theorem, at any rate, or somebody would have done it by now.
Now, we can sort rules by the behavior they show after we calculate what that behavior will be. Then we see some on the simplicity side of the divide, and some on the complex side. Or we can sort rules by the variety of elements or allowed interrelations we let them have, internally. Then we find if those are very, very low, we get only simple behaviors; but if they rise, at all appreciably, past constant functions or a couple of allowed internal states, and have anything that can serve as conditionality and any way of doing things over and over, then some of them will have arbitrarily complicated behavior. Without any more resources or any different setup than other instances just like them that always give simple behavior. Just different details.
Below the very low threshold, details do not matter. The behavior is simple. Above it, the details matter. Some behave simply, many behave with arbitrary complexity. Not more and more complex as internal resources are increased. Right away, bam, all you will ever get. Once the details matter, you can't tell from the setup whether the behavior will be complicated. You have to actually perform the computational work, and see.
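As a rough illustration of "you have to run it and see", here is a heuristic sketch that sorts all 256 elementary CA rules by actually evolving them and testing whether the center column settles into a short period. The horizon and period bound are arbitrary choices of mine, so this is a crude proxy for the real divide, not a formal classification:

```python
def step(row, rule):
    """One elementary-CA step with wrapping edges."""
    n = len(row)
    return [(rule >> (row[(i - 1) % n] * 4 + row[i] * 2 + row[(i + 1) % n])) & 1
            for i in range(n)]

def center_column(rule, width=101, steps=100):
    """Evolve from a single black cell; record the center cell over time."""
    row = [0] * width
    row[width // 2] = 1
    col = []
    for _ in range(steps):
        col.append(row[width // 2])
        row = step(row, rule)
    return col

def looks_periodic(seq, max_period=8):
    """Heuristic: does the second half of seq repeat with a short period?"""
    tail = seq[len(seq) // 2:]  # skip a possible transient
    return any(all(tail[i] == tail[i + p] for i in range(len(tail) - p))
               for p in range(1, max_period + 1))

simple = [r for r in range(256) if looks_periodic(center_column(r))]
print(len(simple), "look simple,", 256 - len(simple), "do not")
```

Rules 0 and 250 land in the simple bin; rule 30 does not - and nothing in the rule numbers themselves tells you in advance which way each will go.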
That is why a fixed instruction set can let a CPU implement any million-line C program you care to write, and get exactly the behavior you intended - if you write it correctly, at least! (The same formal dependence on details will make it hard to predict the effect of any bug you happen to introduce.) And that instruction set could be vastly simpler and the same would still be true. "Compilers" could link any of the simple universal systems discovered in NKS, or before it by others, to the typical real instruction sets from engineering. (Which are no doubt easier to work with, but not capable of anything fundamentally more involved, on the behavior side.)
Programmable complexity is easy to get. It does not need to be smuggled in. Nobody designed rule 110 to be universal, it was simply found by search, in one of the smallest spaces of simple programs. Programmable complexity - and its knowledge limiting consequence, formal undecidability - appear where nobody put them, without anyone designing or intending it. We discover it like a law of mathematics, we do not have to make it. And it crops up about three steps into the world of simple programs, just beyond the door.
I hope this helps.