A New Kind of Science: The NKS Forum > NKS Way of Thinking > incompleteness in an NKS model of the universe
Author
Chris Smeenk

Registered: Feb 2006
Posts: 8

incompleteness in an NKS model of the universe

hello,

Stephen Wolfram has emphasized that the physical universe, including space itself, time, the laws of nature, and the properties of fundamental particles, may be comprehensible through the evolution of a simple system from simple initial conditions. As I understand it, he feels that both the rule and the initial conditions may be discoverable through an exhaustive search of NKS-type systems.

My question concerns how incompleteness results might apply to an NKS model of the universe. Certainly, the model must be a universal computer (since the universe contains universal computers). I think the model would necessarily be consistent too (i.e., it would be impossible to derive a contradiction such as: the electron rest mass is 512 keV and the electron rest mass is 665 GeV). (Side note: kiloelectron-volts and gigaelectron-volts are units of energy used in atomic and nuclear physics.)

Under these two assumptions (i.e., that the NKS system for the universe is suitably powerful, "computationally irreducible" in NKS jargon, and that it is consistent), would it follow that the model of the universe is incomplete? I think it would.

This means there would be true statements about the physical universe that are not derivable within the evolution of the NKS model of the universe. There would be things about the universe, perhaps some of the laws of physics, that are true but not discoverable via the evolution of a simple rule with simple initial conditions.

I am interested to hear thoughts on this topic.

Report this post to a moderator | IP: Logged

08-30-2006 03:30 PM
Jason Cawley
Wolfram Science Group
Phoenix, AZ USA

Registered: Aug 2003
Posts: 712

An evolution rule is not a constraint system.

Axiom systems are multiway systems rather than single-path deterministic systems. They specify a set of allowed operations or equivalences, without specifying which is applied or in what order. All combinations of applications of any of the axioms, in any order, any number of times, then constitute the possible theorems of the system.
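A toy illustration of the multiway point, with made-up rewrite rules (not any particular axiom system from NKS): every allowed rewrite may be applied at any position, in any order, and the set of everything ever generated plays the role of the set of theorems.

```python
# A toy multiway system over strings of A's and B's. The two rewrite rules
# are hypothetical examples; applying any rule anywhere, in any order,
# enumerates the full set of derivable strings ("theorems").
from collections import deque

RULES = [("A", "AB"), ("B", "A")]

def multiway(axiom, max_len=4):
    """Breadth-first enumeration of every string derivable from the axiom."""
    theorems = {axiom}
    frontier = deque([axiom])
    while frontier:
        s = frontier.popleft()
        for lhs, rhs in RULES:
            for i in range(len(s)):
                if s.startswith(lhs, i):
                    t = s[:i] + rhs + s[i + len(lhs):]
                    if len(t) <= max_len and t not in theorems:
                        theorems.add(t)
                        frontier.append(t)
    return theorems
```

Starting from the single axiom "A", this particular pair of rules happens to derive every string of length at most 4 that begins with A; the point is only that many different derivation orders all contribute to one set of theorems.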

With something like a deterministic CA, on the other hand, the rule and the prior state fully specify the subsequent state. There is no choice or variety in what to apply next.
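For contrast, here is a deterministic ECA step in miniature (rule 110 on a cyclic row, using the standard Wolfram rule-number encoding): given the rule and the current row, the next row is fully forced.

```python
# One step of an elementary CA on a cyclic row: the rule number plus the
# current row fully determine the next row. There is no choice of what to
# apply next -- the neighborhood of each cell indexes a bit of the rule.
def eca_step(row, rule=110):
    n = len(row)
    return tuple(
        (rule >> (4 * row[(i - 1) % n] + 2 * row[i] + row[(i + 1) % n])) & 1
        for i in range(n)
    )
```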

One can still raise undecidable questions about single-track, deterministic systems, like whether configuration A can ever be reached from any possible initial condition (sometimes readily decidable, sometimes not). One can also raise questions that are soluble in principle but not in practice by any feasible calculation: questions that simply require too many logical operations in sequence to perform them all, even with computer assistance.

Many general questions about deterministic systems are pushed into the undecidable category by leaving freedom in the initial condition: one is effectively asking something about a whole ensemble of possible evolutions. Since the initials can be put into one-to-one correspondence with the integers, there are countably infinitely many of them. Questions about members of infinite sets of possibilities readily raise undecidability issues.
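The one-to-one correspondence is easy to make explicit; as a sketch, read each positive integer's binary digits as a finite row of cells:

```python
# Enumerate finite initial conditions: the binary expansion of each positive
# integer names one finite cell row, so the set of finite initials is
# countably infinite (and each row with a leading 1 is named exactly once).
def initial_condition(n):
    return tuple(int(b) for b in bin(n)[2:])
```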

But if one has only a single rule, not a class of them - and only a single initial, not a class of them - and that rule is also deterministic ("one track") - then the evolution is not multiway in any respect. It hits what it hits and it does not hit what it does not hit. It may be hard to determine that set by stepping through the evolution one move at a time, because it involves irreducible calculative work. But that is distinct from the issue of derivability proper in a multiway system or axiom system, which is asking about a whole set of possible paths, not deterministically evolving a single one.

Imagine there is a "possible" configuration Q of the universe that is not actually reached by its evolution. One might prove a whole mess of theorems about the rule of the universe, and find that that whole set of theorems fails to exclude Q. But if Q is simply not reached by the actual evolution from the actual initial, it isn't a missing theorem or an underivable truth, it is simply an unreality that isn't covered by the net of theorems you've been able to construct. What's "possible" about it, then?

There is no notion of truth involved beyond "actually hit by the actual evolution of the rule". Constraint systems or axiom systems want to posit, "all statements consistent with these others are true". Single track rules simply do not, they are not talking about such sets of statements. What would completeness even mean for a single track rule?

One might turn it around epistemologically speaking, and ask whether one would be able to verify that initial condition "a" and only initial condition "a", in rule "A" and only in rule "A", could have led to observable "b" - and there one might readily encounter the usual undecidability issues. But what "A(a)" does actually evolve to, is mechanically determined on a single rail track.

As for the universal systems the universe contains, if the NKS view is correct, the strict mathematical sense of universality applies only to mathematical abstractions, to a possibility space, and not to actuality. Actuality is "measure epsilon" in possibility space: some one definite path of things happens. The "possibilities" that don't actually happen are possibilities by courtesy, so to speak. (Really, our notion of the possible refers to multiple cases and requires imagining some things fixed and others not; it is sensitivity analysis at the level of thought experiments.)


08-30-2006 09:15 PM
Tommaso Bolognesi
ISTI-CNR (Nat. Research Council)
Pisa

Registered: Jan 2005
Posts: 15

incompleteness and termination -- in the model and in the universe

The question above, and Jason's answer (which I find quite satisfactory), have triggered in my mind a couple of thoughts that I would like to share.

Let us indeed assume that our universe is the computation of a discrete, deterministic, universal machine that was started with a precise and (hopefully) very simple initial condition.

Let's first assume it is a universal Turing machine.
Then, when wondering about incompleteness, one cannot avoid thinking also about the associated aspect of termination: we know that any universal computer *must* exhibit, for some initial condition, some nonterminating computations.
Let us then hope that the initial condition for the specific computation in which we live has been a good one, implying that our history will run forever!
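The existence of nonterminating computations is easy to exhibit concretely. The two-state machine below is a hypothetical example (not one from the thread): it shuttles right forever and never enters its halt state, which is exactly the kind of behavior every universal machine must admit for some inputs.

```python
# A minimal Turing-machine runner with a step cap. Rules map
# (state, symbol) -> (symbol to write, head move, next state).
def run_tm(rules, steps, state="s0"):
    tape, head = {}, 0
    for _ in range(steps):
        if state == "halt":
            return "halted"
        sym = tape.get(head, 0)
        write, move, state = rules[(state, sym)]
        tape[head] = write
        head += move
    return "still running"

# This machine never reaches "halt": it alternates states and moves right.
LOOPER = {("s0", 0): (1, +1, "s1"), ("s0", 1): (1, +1, "s1"),
          ("s1", 0): (0, +1, "s0"), ("s1", 1): (0, +1, "s0")}
```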

But what about the termination conditions usually associated with Turing machines? Adding a special, conventional final state to a Turing machine is in a sense artificial. No doubt it is necessary if one wants to compute functions in the usual way: take an input, let the machine run, and look at the tape only when the game is over (if ever), to get the output.
But the laws of physics, and the beauty of nature, are not captured by this input/output relation; rather, they emerge from considering the computation in its detailed, step-by-step, internal evolution. Under this view, what would be the purpose of a special conventional terminal state, one at which everything freezes forever? (Unless this is the state immediately preceding the fall of an atomic bomb on my house.)

Likewise, should we postulate that, during the evolution of this universe, Someone is checking the position of the 'Head of The Control Unit' on 'The Tape', ready to stop everything whenever it gets further to the right than its initial position? (This is another commonly used termination condition.) This sounds really a bit artificial.

If we insist on thinking of the universe in terms of Turing-machine computations, I would just drop these standard but artificial termination conditions. I would always fill the transition table completely, so that there is always a next state. No computation would ever terminate (and no function could be computed in the traditional input/output sense), but 'sad endings' would still be possible: the evolution may get stuck in a constant configuration -- a fixed point -- or in a cycle. So, having ruled out the risk of a static deadlock, we should just hope that our initial condition is good enough to avoid a dynamic one, that is, a final cycle of period one or larger.
(Note: this requires a revised notion of universality; see e.g. Sutner's recent work.)
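These 'sad endings' can be checked mechanically for small systems. A sketch for an ECA on a short cyclic row (standard Wolfram rule numbering): evolve until some configuration repeats, then report the period of the final cycle; period one is the fixed point, longer periods are the dynamic deadlocks.

```python
# Evolve an ECA on a cyclic row until a configuration repeats, then return
# the period of the final cycle (1 = fixed point, >1 = a true cycle).
# On a finite row the state space is finite, so the loop always terminates.
def final_period(row, rule):
    seen, t = {}, 0
    while row not in seen:
        seen[row] = t
        n = len(row)
        row = tuple(
            (rule >> (4 * row[(i - 1) % n] + 2 * row[i] + row[(i + 1) % n])) & 1
            for i in range(n)
        )
        t += 1
    return t - seen[row]
```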

With (universal) Cellular Automata things are in a sense simpler: usually in this model one does not care about *defining* a special terminal state -- a peculiar configuration of the cell array -- so the only type of sad ending we should fear for our Universe is, again, the dynamic deadlock.

__________________
========================================================================
Tommaso Bolognesi
Formal Methods && Tools Lab.
CNR - ISTI - Pisa - Italy
e-mail: t.bolognesi@iei.pi.cnr.it
web: http://matrix.iei.pi.cnr.it/FMT/


09-01-2006 04:45 PM
Colin Hales

Registered: Sep 2006
Posts: 1

incompleteness in an NKS model of the universe

Question:
I have a question for the forum: has anyone other than me seen the blizzard of Gödelian unproven truths within every CA?

In relation to incompleteness... well, my question says it all. A cellular automaton is literally bristling with 'virtual theorems' (unproven truths) -- far more 'truths' than are proven by the cell computations. Indeed, the number of unproven truths is proportional to the factorial of the number of cells. There are two versions to consider: a computational version and a formal version. The computational version is easier to see but technically irrelevant to physics. The formal version is relevant to (becomes a) physics.

I am in the process of writing up this new class of Gödelian unproven truths for journal submission, and I would like to know of anyone who has been thinking along these lines, or of links to other forums where this kind of thinking has been going on.

regards,

Colin Hales


09-26-2006 04:30 AM
Tommaso Bolognesi
ISTI-CNR (Nat. Research Council)
Pisa

Registered: Jan 2005
Posts: 15

Does this all boil down to saying that, given an ECA (or any other simple rule) and an initial condition, there are some cell configurations that will be reached, and some that will not?

I find it hard to say much more than this about an individual, deterministic ECA computation. I see an ECA computation just as a single, deterministic run that computes ... itself, not as something set up to prove or disprove theorems. I cannot even understand what 'truth' means for a single, deterministic computation (other than elementhood for a cell configuration, as mentioned above).
Of course, IF you take the rule and allow yourself to play with it, to run it on ANY initial condition you like, AND you have a way to interpret initial conditions and perhaps final conditions (they CODE some configuration of another formal system you have in mind), then you can use the rule to prove things, and to enumerate truths. But all this requires external work, human interpretation...
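'Elementhood for a cell configuration' can be made concrete for a small lattice: a width-8 cyclic row has 2**8 = 256 possible configurations, but a single deterministic run visits only those its trajectory actually hits. A sketch:

```python
# Collect the set of configurations actually visited by one deterministic
# ECA run; every configuration outside this set is simply never reached
# by that particular evolution.
def reached(row, rule, steps):
    n = len(row)
    visited = {row}
    for _ in range(steps):
        row = tuple(
            (rule >> (4 * row[(i - 1) % n] + 2 * row[i] + row[(i + 1) % n])) & 1
            for i in range(n)
        )
        visited.add(row)
    return visited
```

For rule 0, for instance, every run falls straight into the all-zero row, so only 2 of the 256 width-8 configurations are ever reached from a single-cell start; the other 254 are "unrealities" in Jason's sense.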

On the other hand, all these human brain activities, involving proofs and the idea of 'truth', are phenomena that take place in the universe itself, and that should be governed exactly by that same universal deterministic rule.

How can we then reconcile the single track nature of the universal computation with the multiway nature of proof systems and proof activities that go on in the human brain?

This reminds me of the notion of ergodicity, which I understand in terms of dolphins: take SEVERAL snapshots, at different times t1, ..., tn, of a SINGLE dolphin d jumping in and out of the ocean, and then take a SINGLE snapshot, at time t, of a group of SEVERAL dolphins d1, ..., dn (you'd see just the face of one, just the tail of another, and so on): the information you get about the 'dolphin process' is the same in the two cases -- in a sense, you trade space for time.

Perhaps the universal computation is single-track, like a single dolphin, but it is 'ergodic'-like, so that it also exhibits, although displaced in time, the features of a multiway system.
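The dolphin picture has a minimal deterministic analogue (an illustrative example, not from the post above): a rotation on n states visits every state in one orbit, so the time average of any observable along a single trajectory equals the average of the same observable over all states at one instant.

```python
# 'Trading space for time': for the deterministic rotation x -> x+1 (mod n),
# one trajectory of length n visits every state exactly once, so its time
# average of any observable equals the one-instant ensemble average.
def time_average(f, start, n):
    return sum(f((start + t) % n) for t in range(n)) / n

def ensemble_average(f, n):
    return sum(f(x) for x in range(n)) / n
```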

But how about the 'Goedellian unproven truths'?
What can we say about the process that takes place in the human mind and leads to the conclusion that something unprovable is indeed true?
Did this process follow the Rule?
(I guess this topic must have been covered a few times in the Forum...)

Tommaso


09-26-2006 09:00 AM
Simon85

Registered: Sep 2006
Posts: 13

Assuming NKS is right and the universe is a universal Turing machine, what, from a philosophical standpoint, do we base morality on?

Also, a CA cannot be the basis of the universe. The theory NKS seems to propose is based on Computational Equivalence, which is not well defined; and even if it were, can one claim it holds for what we see around us, when what we see is not a lot of systems working in parallel, but systems built one on top of the other, which collectively create the complexity of what we see?

Now, I may be wrong, but a CA won't do that, at least not from what I have seen.

-Simon


09-26-2006 07:34 PM


Forum Sponsored by Wolfram Research

© 2004-13 Wolfram Research, Inc. | Powered by vBulletin 2.3.0 © 2000-2002 Jelsoft Enterprises, Ltd. | Disclaimer | Archives