Wolfram Science Group
Phoenix, AZ USA
Registered: Aug 2003
The morning of the mini-course, the first question from the audience during Wolfram's talk was about the randomness in rule 30. Isn't it pseudorandomness, not randomness, since it is the same every time?
Wolfram responded briefly before continuing with the lecture. He noted that we often face this issue: we want to use a common term in a somewhat new way for scientific purposes, and it is important to keep the concept reasonably close to its common-sense meaning. In practice, by "random" we mean that we can't easily predict something. If we saw something like the rule 30 pattern (he had a large example slide up) in nature, we would say it looks random. Yes, of course, if we know exactly where it came from, we can get it again, the same every time. When we don't know, we think it looks random.
I think there is a way to make the idea behind these comments more precise. Our intuition that something looks random can be taken as a hypothesis about an underlying cause. Or it can be taken as a minimal model we would use to predict it, when we know we can't predict it in detail because we do not understand it well enough. In both cases, the notion of randomness is relative to a state of our knowledge. But the first is essentially a theoretical projection, while the second is essentially a practical expectation.
What do I mean? Well, we could see a pattern like rule 30 and guess that coins are being flipped independently to determine the color of each cell. We'd have a model of an actual causal process giving rise to the disorder we perceive - each element determined independently at random. Notice that we still project a regular simplicity behind the data - a fair coin. We project an iterated use of that regularity. And this hypothesis gives us statistical expectations for the behavior, without our being able to predict the details.
The practical expectation would be the same even without the causal hypothesis. It does not depend on a true fair coin being flipped repeatedly. It depends on our not knowing how the coin will land. Anything else about which we know as little carries the same practical expectation.
For instance, imagine a bet between two people who have not calculated the millionth center cell of the rule 30 pattern from a single initial condition. Will either of them claim, just from knowing the rule and knowing the step number, that he can predict what the cell value will be, without calculating out each step?
Suppose someone says, "well, I know it is not random". The other then replies, "fine, then give me 2 to 1 odds that it is either black or white - you can choose which you think is more likely, I get the other one". Despite its being a deterministic system, this is not easy money for the guy giving the odds. In fact, they will arrive at even money as the only odds they will agree on.
Now, suppose it is rule 250 instead, which gives a checkerboard pattern. Is the result the same? No. A million is even, and that is all one needs to know to predict the color of the center cell at the millionth step. No one will bet against another's claim to know the result, even if given 10 to 1 odds or more.
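To make the contrast concrete, here is a minimal sketch (mine, in Python - not from the lecture) of both cases. For rule 30, the only known route to the center cell at step n is to run every step; for rule 250, the center column simply alternates with step parity, so parity alone settles the bet.

```python
def step(cells, rule):
    """Advance one row of an elementary cellular automaton.
    cells is a list of 0/1 values; the row is padded with white (0)
    cells on each side, so it grows by one cell per side each step."""
    cells = [0, 0] + cells + [0, 0]
    return [
        (rule >> (4 * cells[i - 1] + 2 * cells[i] + cells[i + 1])) & 1
        for i in range(1, len(cells) - 1)
    ]

def center_column(rule, steps):
    """Center-cell value at each step, starting from a single black cell."""
    cells, column = [1], [1]
    for _ in range(steps):
        cells = step(cells, rule)
        column.append(cells[len(cells) // 2])
    return column

# Rule 30: no shortcut is known; the column looks like coin flips.
print(center_column(30, 15))

# Rule 250: the center column alternates black, white, black, white...
# so the value at any step follows from the step's parity alone.
assert center_column(250, 15) == [(n + 1) % 2 for n in range(16)]
```

The point of the sketch is only that nothing in the rule 30 code offers a way to jump ahead to step one million, while for rule 250 the parity check replaces the whole computation.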
Both are deterministic systems. But our best estimate of what one of them will do, before we do the computational work necessary to determine the result, is exactly the same as our estimate for the fair coin. And the fair coin, after all, is in principle also a deterministic system - just an unpredictable one.
This clarifies the continuity of meaning between "random" in common sense terms, and "random" applied to something like a cellular automaton pattern. Or the digits of pi.
I hope this is interesting.