Wolfram Science Group
Phoenix, AZ USA
Registered: Aug 2003
One can certainly apply Popper's philosophy of science to NKS. I also think NKS suggests some amendments to that philosophy of science in a few places. And one can analyse the PCE claim in particular and ask how it might wind up being wrong, in order to determine its empirical content as it were, and to verify the fact that it "sticks its neck out" and makes real claims about the outside world.
But I don't think the idea of a single simple fundamental rule is the critical prediction of PCE. They are logically separable, in the sense that either might hold without the other. The "rule for the universe" (RU for short) claim may be the most striking that Wolfram advances in NKS, but it is not the backbone of the logical argument, nor is finding it entailed by PCE.
To see this we have to look in detail at the PCE claim, see what it is about and what it is trying to explain, and then see how that explanation might fail, empirically. Which I will do below. But first to clear away one issue that might give rise to misconceptions at some point, I want to explain the possible amendments or extensions to Popper, that I alluded to above.
There are some parts of pure NKS that are not falsifiable, any more than most of mathematics is falsifiable. That a given CA from this initial condition behaves in exactly this manner, is a formally required, logical fact. It has the status of a mathematical construction, like a geometric diagram. Before one has made the construction, the relations it makes plain might not have been apparent. But once it is made and shown (and, to be sure, checked as proofs in math have to be checked) it isn't going to be wrong under any possible empirical state of affairs.
In NKS we make far greater use of an experimentalist's methods, even in areas where the results (once found) have the above character, than has been customary (at least in recent times) in most of traditional mathematics. There are other parts of NKS that have the character of mathematical conjectures. The curious thing about formal conjectures from a Popperian point of view is that they appear unfalsifiable after the fact (since logically or formally entailed, if true), but still "stick their neck out": they might be wrong from the standpoint of the one making the conjecture, who knows only what he knows at the time the conjecture is made.
Popper effectively wanted to rely on something not being logically entailed, even after the fact, to separate formal from empirical truths. But a formal conjecture, before the computation (or proof) that settles the answer, has all the characteristics of an empirical hypothesis, except logical falsifiability after the fact. Popper was implicitly assuming that the logical entailments of formal premises would be simple or obvious. But that is in general not the case, and in NKS it is rarely the case.
Usually, in pure NKS, even if we know exactly how a given formal system works (its formal description, rule, code that evolves it etc), we do not know what it will actually do until we perform an elaborate formal calculation. Having the program is not having its output - between stands a mass of non-trivial calculative work. And the number and variety of systems to consider, renders it completely impractical to perform even a minute fraction of these calculations explicitly. We pick which formal experiments to perform, to learn as much as possible about other similar ones, because we know we can't perform them all. We necessarily rely on conjecture to extend the limited space of calculations we can explicitly perform.
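To make the point concrete, here is a minimal sketch in Python (my illustration, not part of the original post): the rule table for elementary cellular automaton 30 fits in a single byte, yet there is no way to know what the row looks like after 100 steps except by actually computing all 100 intermediate steps.

```python
def step(cells, rule=30):
    """One step of an elementary cellular automaton; cells is a list of 0/1."""
    padded = [0] + cells + [0]  # widen the row so the pattern can spread
    out = []
    for i in range(len(padded)):
        left = padded[i - 1] if i > 0 else 0
        center = padded[i]
        right = padded[i + 1] if i < len(padded) - 1 else 0
        # look up the neighborhood in the 8-bit rule number
        out.append((rule >> (4 * left + 2 * center + right)) & 1)
    return out

row = [1]  # a single black cell
for _ in range(100):
    row = step(row)

# The rule itself is one byte of information; the row below was only
# obtainable by performing the full step-by-step computation.
print(len(row))  # 201 cells after 100 steps
```

The same loop with rule=90 or rule=110 makes the same point: nothing in the rule number itself tells you in advance what the evolution will look like.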
Thus my proposed amendment to Popper, as follows. Even in formal contexts, where after the fact a definite calculation can show that some consequence necessarily follows from some set of premises, if before performing that calculation you had no way of knowing what that outcome would be, then conjectures about that outcome are meaningful science, "effectively" falsifiable. In other words, forget about any imaginary Platonic standpoint outside time in which all logical or formal truths are trivial, because we don't live there. An involved enough formal truth has a status akin to an empirical truth, and the methods used to discover it can and in practice do include the inductive guesswork and falsifiability setup that Popper describes for empirical science.
Now, none of that means that applied NKS doesn't face the same issues of reaching the empirical world, as opposed to merely formal truths, that any other theory faces. It does. You can have a nice formal theory that has all kinds of nice formal characteristics, but happens not to be the formal relationship actually going on in empirical system X or Y. For formal to empirical correspondence, the usual Popperian machinery applies. The only quibble here is that the consequences of a complicated enough formalism may not be trivial to work out, for the reasons described previously. Formal experiments may be needed to tell you what your model predicts, because that may not be obvious from the construction of the model alone.
All of that was an aside about amending Popper - in ways I hope are friendly, incidentally. Now to PCE, what it is trying to explain, and how it might wind up being wrong.
PCE is trying to explain the phenomenon of complexity. It is a thesis about the essential, underlying cause of certain things appearing complex to us, while others do not. PCE claims there is something real and essential involved, not simply detailed and contingent limitations of our thinking, formalisms, or apparatus. And it claims complexity has a single essence that marks it out, not a bunch of different causes from case to case.
More, it tells us what that cause is supposed to be. Almost all cases of empirically or formally seen complexity are supposed to arise because the systems involved are as computationally sophisticated as anything in our universe, or because they all have the finite analog of universality (or would be universal in the strict mathematical sense, in suitably idealized extensions or limits). In short, PCE tells us universality is the cause of complexity.
Now, this could easily be wrong. It might be the case that most complicated systems lack sufficient computational flexibility to be capable of universal computation. I can advance a plausible alternate thesis, to distinguish PCE and how much it is claiming. NKS has shown us the category of computational irreducibility - when there is no way of compressing a computation significantly more efficient than emulating it step by step (up to fixed-factor translation, e.g. 1 step for 2, or one-off transforms of the data, etc.). We can see, logically, that universality entails irreducibility in at least some of the cases, or some possible evolutions, of a given universal system. But the reverse is not the case. Irreducibility need not entail universality.
So, suppose it turns out that the cause of apparent complexity is that most complex looking systems are indeed computationally irreducible - but that they are below the threshold of universality, and not capable of supporting universal computation. Then NKS would still have lots to say to us about such systems. Many of its methodological innovations stem from irreducibility - e.g. the need to simulate. But PCE as stated would be wrong. Only some limited subset of complex looking systems would be universal. Universality would be a special case inside a larger category of irreducibles, and that special case might - empirically - be rare and unusual rather than everywhere and normal.
PCE is sticking its neck out, even compared to the claim that almost all apparently complex systems are computationally irreducible. It is a distinctly stronger claim than that, one that could much more easily be wrong.
Even a weaker "complexity is caused by irreducibility" claim might be false, empirically. We can find some formal systems that can be fully analyzed (reduced to a short formula, "compiled", short-cut) that on the surface look complicated, with the full analysis not obvious but in the end possible. I call those "cracks", as in cracking codes or safes. A given apparently complicated system might not be irreducible. It might be "crackable", instead.
Thus a conjecture that most empirically seen cases of complexity will prove computationally irreducible is a falsifiable claim. It is possible, instead, that most empirical cases of complexity have "cracks" that we just haven't found yet. I don't think so. I think "cracks" are special cases, arising on the borderline between simple behaviors and irreducible ones. But it is logically possible most cases of complexity have short formula "cracks", i.e. are reducible, and this can be shown for any particular case by finding the crack and presenting it.
That covers a number of ways PCE could wind up being wrong. Next I'll show that PCE and RU are separable. First, PCE might be true and RU false. How? Clearly from the first we know universal computations happen in nature and so the laws of the universe must support them. But nothing in that alone says, they must be compressible to a single simple rule. The laws of nature might instead be a relatively complicated rule with many subclauses. That would not invalidate PCE. We know of complicated formal rule systems with many subclauses that are capable of universal computation - higher level computer languages, for instance. Suppose the rules of the universe are like a million lines of complicated code, capable of supporting universal computation. Then PCE might hold and RU fail. RU is adding something additional, in this case.
Second, RU might hold and PCE might fail. How? If RU holds, there is a simple rule that gives rise to all the laws of physics. Fine. We can readily assume the RU in question is fully capable of supporting universal computation. Universal computation occurs in the universe. No problem. But most instances of complexity around us might be only irreducible, and not universal. Nothing says every subsystem must include all the computational capacity of the full RU.
A system capable of universal computation can emulate any finite algorithm. Simple ones as well as complex ones. Maybe almost all the apparently complicated systems we see are simple but well-hidden "cracks", in terms of the RU code. Just as a universal computer can do something simple like add 2 and 2, it can do something reducible but not easily analyzed from the results alone, like evolve rule 90 for a number of steps of the form 2^n, n an integer.
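As an illustration of such a "crack" (my sketch in Python, not from the original post): rule 90 from a single seed looks intricate step by step, but it reduces, via Lucas' theorem on binomial coefficients mod 2, to a one-line bitwise formula. In particular, at steps of the form 2^n only the two edge cells survive, and the formula tells you this without running the evolution.

```python
def rule90_step(cells):
    """One step of rule 90: each cell becomes the XOR of its two neighbors."""
    padded = [0, 0] + cells + [0, 0]
    return [padded[i - 1] ^ padded[i + 1] for i in range(1, len(padded) - 1)]

def rule90_crack(t, x):
    """Closed form for rule 90 from a single seed at x=0.

    Cell x at step t is on iff t+x is even and C(t, (t+x)/2) is odd;
    by Lucas' theorem, C(t, k) is odd iff k's binary digits are a
    subset of t's, i.e. (k & t) == k.
    """
    if (t + x) % 2 != 0 or abs(x) > t:
        return 0
    k = (t + x) // 2
    return 1 if (k & t) == k else 0

# Compare brute-force evolution against the formula for 16 steps.
row = [1]  # single seed; index i at step t corresponds to position x = i - t
for t in range(1, 17):
    row = rule90_step(row)
    assert row == [rule90_crack(t, i - t) for i in range(len(row))]

# At t = 16 = 2^4, only the two edge cells are on, as the formula predicts.
print(sum(row), row[0], row[-1])  # 2 1 1
```

So a system hosting rule 90 is doing something that looks complicated from the output alone, yet has a short formula short-cut - a "crack" in exactly the sense above.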
Here is another way RU might fail while PCE holds. It might be the case that QM (or QFT) is not an abstraction or idealization of some underlying generating system, but simply true at some base level. The most primitive rules of the universe might not even be single-valued aka deterministic, let alone a simple deterministic rule. But it might still be the case that almost all the aggregated systems we see that exhibit complexity, are capable of supporting universal computation. NKS would be wrong about fundamental physics but right about the basic cause of complexity.
So either RU or PCE might hold without the other holding. Ergo, RU holding or not cannot be the crucial test of PCE. Instead, PCE stands or falls based on whether complex systems on the one hand have simple, reducible explanations, or irreducible ones not past the threshold of universality, or on the other hand are systems capable of supporting universality (with suitable idealizations about limits etc.), or are as close to it as any system implementable in our universe (to cover e.g. finite nature cases).
Excellent question by the way. It is a topic I regularly cover in philosophy talks at the NKS summer program. I hope this is useful.