Jason Cawley
Wolfram Science Group
Phoenix, AZ USA


Determinism and Randomness

In another thread, Richard Gaylord raised a question about the NKS preference for determinism in models rather than randomness. He called it a philosophical stance, and asked what was simpler about it.

It is a fair question. Here is my answer. The following is much broader than Richard's question and directed at other schools of thought than the one I would put him in - so, Richard, bear with me if much of the following seems to be addressed to somebody else. I hope I address your own position along the way.

Yes, it is philosophically possible that there are real underlying random behaviors in the world. When modeling any given system (less than the world), it is also possible that the complex behavior seen may be due to continual "vertical" inputs from outside that particular system, buffeting it in ways uncorrelated with the rest of its internal states. NKS calls this "continual injection of randomness" or "randomness from the environment". If this is what is actually going on, then an appropriate model will either use randomness, or extend the boundaries of the modeled system to include some of the "noise" sources.

When a Mathematica program for a random model calls Random, it is using rule 30 to generate an uncorrelated bit and then injecting it into the behavior of the modeled system. It is as though the overall model were a lot of rule 30s sitting on top, dropping signals, bit by bit, to the rest. This counts as "random" rather than "deterministic" simply because the rule 30s aren't considered part of the model and what they produce is taken in uncorrelated snippets. Similarly, randomness is sometimes characterized as the result of "lots of coin tosses" - but nobody expects those coin tosses to be modeled as digit extraction from an ensemble of detailed initial velocities provided to a rapidly rotating disk on a parabolic arc within Newtonian gravity.
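
To make this concrete, here is a minimal sketch in Mathematica of the idea - not the actual internals of Random, just an illustrative generator that reads bits off the center column of rule 30 (the name rule30Bits is mine):

    (* Illustrative only: pseudorandom bits taken from the center column of rule 30 *)
    rule30Bits[n_] := CellularAutomaton[30, {{1}, 0}, n][[All, n + 1]];

    bits = rule30Bits[2000];
    N[Mean[bits]]  (* close to 0.5 - the bits behave like fair coin tosses *)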

We look at the typical result of either procedure. We reduce it to a probability distribution. Any other process with the same probability characteristics, we treat as functionally equivalent to the original procedure. We don't care whether one generated randomness intrinsically, or another got it from details of initial conditions, or a third got it from buffetings from an external environment. All we care about is whether p is 0.5 and run data are distributed normally and the like - and that the procedure is orthogonal to the rest of our modeling.
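
As a rough sketch of that functional equivalence (illustrative, not a proper statistical test), one can compare coarse statistics - the fraction of 1s and the mean run length - for bits generated deterministically by rule 30 and bits drawn from Mathematica's built-in pseudorandom generator:

    (* Two bit sources, judged only by their coarse statistics *)
    intrinsic = CellularAutomaton[30, {{1}, 0}, 2000][[All, 2001]];  (* rule 30 center column *)
    external  = RandomInteger[1, 2001];                              (* built-in generator *)

    meanRun[bits_] := N[Mean[Length /@ Split[bits]]];  (* average length of runs of identical bits *)

    {N[Mean[intrinsic]], N[Mean[external]]}  (* both near 0.5 *)
    {meanRun[intrinsic], meanRun[external]}  (* both near 2, as for fair coin tosses *)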

All of that just addresses potential confusions about where randomness comes from or what counts as randomness in a model. It does not address whether a model should put in randomness explicitly at every step, put in randomness only once in the initial conditions, or not deliberately put in randomness at all and let it emerge only from intrinsic complex interactions.

If you put in randomness at every step, runs differ from one another every time. Effectively you have a larger space of initials, now generalized to boundary conditions, with some of the boundary not being back at time 0 but "arriving" later. The reason NKS minimizes this is that in general it makes it harder to see what is causing a given complex phenomenon.
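
The contrast can be sketched directly (again purely for illustration, with rule 30 standing in for "the model" and noisyStep a name of my own): one run takes randomness only in its initial condition, so it is reproducible from a seed; the other flips cells afresh at every step, so every run differs.

    (* Randomness once, at the initial condition only *)
    SeedRandom[42];
    init = RandomInteger[1, 101];
    onceOnly = CellularAutomaton[30, init, 100];  (* fully deterministic after step 0 *)

    (* Randomness at every step: flip each cell with probability p after each update *)
    noisyStep[state_, p_] :=
      If[RandomReal[] < p, 1 - #, #] & /@ CellularAutomaton[30, state];
    everyStep = NestList[noisyStep[#, 0.02] &, init, 100];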

If you've got one complicated process, you can tell where the complexity seen came from. If you have a process with no complexity at all (e.g. a 1D random walk), you can attribute any complexity seen to the part you deliberately made random. But if you've got 4 types of random variables interacting in complicated ways in a 100-line program, you have no idea what is causing some complicated outcome seen in the results. The principle is similar to varying only one thing at a time in "sensitivity analysis". If you vary n^4*t things at once, you get no idea what is causing what. If you see simplicity, you don't know whether it is because of some regular relationship or because of the central limit theorem.
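
For the random walk mentioned above, a tiny sketch makes the attribution point: the deterministic part of the rule (accumulating +1s and -1s) has no complexity of its own, so whatever irregularity the path shows traces back to the single stream of injected coin tosses.

    (* A 1D random walk: all irregularity comes from the one deliberately random input *)
    SeedRandom[1];
    steps = 2 RandomInteger[1, 1000] - 1;  (* +1 or -1, the only randomness in the model *)
    walk = Accumulate[steps];
    ListLinePlot[walk]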

It can also just make the possibility space computationally harder to "span" - to search exhaustively rather than just sample.
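
As an illustration of "spanning" (a sketch, with sizes chosen only to keep it small): a deterministic rule over a 12-cell initial condition has just 2^12 = 4096 cases, so every one of them can be run; with fresh randomness at every step, the same setup could only be sampled.

    (* Exhaustively spanning the initial-condition space of a small deterministic model *)
    n = 12;
    allInits = Tuples[{0, 1}, n];                           (* all 4096 initial conditions *)
    allRuns = CellularAutomaton[30, #, 50] & /@ allInits;   (* every case, examined once *)
    Length[allRuns]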

NKS has noticed that these modeling approaches are not necessary to produce complex results - deterministic rules are sufficient to produce them, and simple rules are sufficient as well, rather than complicated ones. That does not mean that every place complexity appears, the underlying causes are deterministic and simple rules, just iterated and interacting. It does mean this cannot be ruled out just by the complexity of the behavior seen.
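
The standard NKS illustration of that sufficiency is rule 30 grown from a single black cell - a rule that is as simple and as deterministic as they come, producing behavior that is nonetheless complex:

    (* Rule 30 from a single black cell: simple, deterministic, complex behavior *)
    ArrayPlot[CellularAutomaton[30, {{1}, 0}, 200]]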

Since science is a search for rules, we look for rules where they might be present. We scan for them. Since science is the art of systematic oversimplification - the art of noticing what we may with advantage omit - we look for simple rules rather than complicated ones, if simple ones suffice. Since science proceeds by specialization and analysis, where it can, we look inside the subsystem we are examining for the causes of its behavior, rather than outside of it, when possible. Since science seeks reproducible regularity, we look for deterministic rules within subsystems - causes - rather than connections, interactions, and correlations.

We will not always find these things, even if we look for them. Worse, sometimes we may find them when they aren't there. Science is guesswork, and our guesses can be wrong. We deal with that simply by iterating on our guesses and paying attention to the actual phenomena, as open as possible to what data can tell us.

But what we do not do is notice that everything is connected to everything else, and some things are random, and some things are sensitively connected to other things, and conclude that anything could happen and might be correlated with anything else, and rehabilitate astrology. Nor do we conclude that we can't know anything unless we know everything, which we don't. We don't do these things or draw such conclusions because they block productive inquiry. They are ways the mind tells itself to shut itself off. Peirce's first rule of reasoning is "do not block the way of inquiry".

Now, the hypothesis that practically everything we might be interested in is actually random blocks inquiry. It digests the central limit theorem, perhaps, and keeps the insurance companies and casinos in business as a result, but then halts. Philosophically, it is a real possibility in any given case, and we should keep it in mind. When we can't find any sensible deterministic model, we can seriously entertain the possibility that there simply isn't one.

But this should be a philosophic assessment of a settled state of persistent failure, not a cosy armchair where we vacation from the sweaty work of model building. Reasonable people may seriously believe that QM is just plain random, all the way down. And practical people may notice that, as yet, deterministic models of e.g. price series are no better than random ones. But certainly in the latter case it is not because all plausible deterministic modeling possibilities have been tried and have failed. It is more in the nature of the armchair in that case.

(There are some economic theorems about the limited possible persistence of widely known deterministic models of prices, to be sure. But that is a much more limited thing than, e.g., QM.)

We should look for deterministic models because finding the causes of things is what science is about. We should not be dogmatic about it always being possible to find them. Actually, I will weaken that statement. If you want to be dogmatic about it, go right ahead, it will not harm anything. Just keep your eyes open and notice if you aren't succeeding at it in a given line of inquiry.

If you find random models interesting, on the other hand, fine - play around with them. Personally, I'd suggest keeping them very simple and, when possible, having randomness come into your model in only one way and at only one place or step - for the sensitivity-analysis reasons mentioned above. And you can entertain the philosophical view that underlying things are really random if it appeals to you.

But don't try to tell deterministic modelers they can't possibly model X. If they model X, they model X; they don't need prior authorization in the matter. Do not block the way of inquiry with prior dictates about what others can possibly know. Humility about knowledge can be reasonable enough when kept to one's own claims. When turned into a commodity for export, it loses its virtue.

Posted 08-03-2004 03:35 PM


 
