Wolfram Science Group
Phoenix, AZ USA
Registered: Aug 2003
Several specific idealizations need to be combined to reach a set that is not computable by a deterministic machine. First I'll dispose of several simpler cases that often confuse people about the real issue, showing that the essential feature is not determinism vs. multiple possible outputs, nor a closed system vs. one in continuous contact with an external environment (another formulation), but a cardinality issue that is much more difficult to reach in a realistic example.
Suppose I make a multi-way CA that chooses at each step whether to act like 0 or like 1. It now has two initial conditions - the ordinary one, n bits long, and another with one bit per step, which "calls the coin flips" as you run through the steps. (In Mathematica terms, a FoldList reads in one bit per step from the second initial condition.) If I wanted to, I could then pair up my two initial conditions and enumerate all of them (e.g. with a pairing function). This is formally equivalent to a non-deterministic system that generates the new bit ex nihilo from step to step. But it is deterministic - it just has a more elaborate initial condition and feeds that initial condition into the evolution in a slightly different way. Since they are in one-to-one correspondence, the computations they reach are the same.
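Here is a minimal sketch of the scheme, in Python rather than Mathematica. The specific rule choices (rule 90 vs. rule 110) and all helper names are illustrative assumptions, not anything canonical: a coin-flip tape makes the "multi-way" evolution fully deterministic, and a classic Cantor pairing function enumerates the two initial conditions as one.

```python
def rule_table(number):
    # lookup table for an elementary CA rule: neighborhood (a,b,c)
    # maps to bit (a*4 + b*2 + c) of the rule number
    return {(a, b, c): (number >> (a * 4 + b * 2 + c)) & 1
            for a in (0, 1) for b in (0, 1) for c in (0, 1)}

# two example rules the system can "choose" between (assumed, for illustration)
RULE_A, RULE_B = rule_table(90), rule_table(110)

def step(state, rule):
    # one CA step on a tuple of bits, periodic boundary
    n = len(state)
    return tuple(rule[(state[(i - 1) % n], state[i], state[(i + 1) % n])]
                 for i in range(n))

def run(state, flips):
    # deterministic evolution: one coin-flip bit consumed per step,
    # FoldList-style, selecting which rule to apply
    for f in flips:
        state = step(state, RULE_A if f == 0 else RULE_B)
    return state

def cantor_pair(a, b):
    # bijection N x N -> N: the two initial conditions (ordinary state
    # and flip tape, each encoded as an integer) enumerate as one
    return (a + b) * (a + b + 1) // 2 + b
```

The point of `cantor_pair` is exactly the one-to-one correspondence in the text: nothing about the second initial condition enlarges the set of reachable computations.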
OK, but suppose I allow faster branching, a thousand-fold at each step. Then I need 10 bits per step to call the coin flips (2^10 = 1024), but the same scheme works. Same computations reached. To reach a larger space, I need an unbounded per-step input. Contact with an external environment is not enough; the information channel needs to be limitless. Any size, as large as you please, that is guaranteed not to be exceeded at any step means the space reached is the same. Only the possibility of an infinite amount of information entering on a single step can change the cardinality reached and break the one-to-one correspondence. If the amount of branching is variable at each step, the same logic applies. If that amount is bounded, you reach the deterministically computables only, nothing added.
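The bookkeeping here is just fixed-width encoding. A sketch, with assumed helper names: any k-way branching needs only ceil(log2(k)) bits per step, so a bounded branching factor always fits in a bounded-width tape.

```python
from math import ceil, log2

def bits_per_step(branching):
    # minimal number of coin-flip bits needed to select among
    # `branching` alternatives at each step
    return ceil(log2(branching))

def encode_choices(choices, branching):
    # pack a sequence of per-step choices into one flat bit tape,
    # fixed width per step
    w = bits_per_step(branching)
    return ''.join(format(c, f'0{w}b') for c in choices)
```

Thousand-fold branching costs 10 bits per step, exactly as in the text; any bound, however large, gives a finite width, and the pairing argument goes through unchanged.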
Suppose I allow an infinite input at each step, but I know the computation eventually halts. Then again I reach only the computables. To see this, imagine the computation printed out like a CA evolution running down the page, but in the mathematical idealization we have passed to the infinite-width limit, and we are flipping a coin for each cell. Well, just turn the paper sideways. Countable, same issue as before: infinite on one axis but not on the other. I encode all the random flips for location i into one mega-digit of my initial condition (then pair them all up with the ordinary initial condition). I can do so because the number of steps is finite.
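The "turn the paper sideways" move can be shown on a finite grid; the function name is an assumption for illustration. With T steps and any width, each column's T flips pack into a single base-2^T "mega-digit":

```python
def columns_as_megadigits(flip_rows):
    # flip_rows: T rows of equal width W (T steps down the page,
    # W cells across). Turning the paper sideways packs all T flips
    # at location i into one integer digit in base 2**T.
    T = len(flip_rows)
    W = len(flip_rows[0])
    return [sum(flip_rows[t][i] << t for t in range(T)) for i in range(W)]
```

Because the step count T is finite, each mega-digit is a finite integer, and the whole sideways tape is an ordinary countable initial condition.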
So multiway is not enough. External interaction is not enough. To break the countable cardinality, I need a potentially infinite information channel (countably infinite, not finite) at each of a potentially infinite number of steps (countably infinite, not finite). A countable-by-countable array of input bits is an input of real-number cardinality. If my system evolves according to an initial condition of real-number cardinality, using an amount of information that is not just infinite but continuum infinite, then and only then can I (though I need not) reach a space of possible outcomes larger than the computables. This needed information can always be viewed as an initial condition of a sort. Non-deterministic countably infinite initials with halting problems (thus potentially infinite time as well) map to the reals rather than the integers.
If you start at the reals, the computation part adds nothing. You aren't computable because you aren't even name-able, as Chaitin likes to put it. You just have infinitely many unspecified "leaves" of possibility, in no necessary relation to any computational or causative "trunk". Things just happen to be so, no (followable) explanation possible.
Strictly, this only follows if you have not only reals, but also that the distinction between one real and another matters for outcomes. If all the reals in some ball of positive measure are indistinguishable on the outcome side (within a bounded region you can't tell them apart), you can find a countable cover, by only asking where some representative from within that ball lands. And it doesn't matter how small that ball is, so long as one scale exists below which outcomes are indistinguishable. So in addition to reals, you need infinite sensitivity to the details hidden potentially infinitely far down in the real in question. (Otherwise, you've got a cut-off below which it doesn't matter, you get a finite length again, turn the paper...)
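The countable-cover idea can be made concrete with a toy sketch; the resolution 0.1 and all names here are assumptions for illustration. If outcomes can't distinguish reals closer than some epsilon, the outcome map factors through a countable grid of representatives:

```python
def representative(x, eps):
    # pick the grid point representing x's epsilon-ball; the grid
    # points form a countable set, so any outcome map insensitive
    # below scale eps factors through a countable cover
    return round(x / eps) * eps

def outcome(x):
    # toy outcome map that cannot tell apart reals closer than 0.1
    return representative(x, 0.1)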
People frequently speak of continuity as the culprit giving rise to infinite-cardinality issues, but this is somewhat misleading. In the native context of analysis, continuity is precisely what tames real-number infinities, by giving us that "below epsilon, I don't care, because I will remain within delta" cut-off. If, however, a process is programmably complex - i.e. if it depends on the details of its exact digit sequence the way a program depends on every bit, not just on the first n bits with the remainder irrelevant detail - and in addition its specification requires a continuum-infinite amount of information, then wackiness ensues.
Philosophically, I think continuum infinities are precisely about "distinguishability breakdown". They arise naturally from trying to consider all possible relations between infinite sets. Individual relations over infinite sets work fine, all possible relations over finite sets also work fine, and even all possible relations of less than infinite sensitivity (some form of continuity analog), over infinite sets, would work fine.
But trying to distinguish all possible infinitely sensitive relations between infinite classes of objects is trying to distinguish what by hypothesis you've made indistinguishable. You are left with the bare idea that "things one has no way of telling apart might in principle still be different". As an abstract mental fact, that can be entertained. Spinning a whole mathematics on it is more than a bit like the proverbial angels on the head of a pin. It is in fact just the modernized form of that historical discussion.
The Church-Turing thesis tries to deal with these issues by postulating that only computable processes are realizable in our universe. Of any given outcome, it is in general formally undecidable whether some deterministic machine-initial pair produces that outcome. We know from enumeration arguments that most possible strings are not computable. But we don't have any (OK, much - see below) reason to think those "possibles" are physically possible, as opposed to possible as mathematical abstractions.
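The enumeration argument reduces to simple counting; this sketch (names assumed, for illustration) shows the usual form of it. There are 2^n binary strings of length n, but only 2^n - 1 programs of length strictly less than n, so most strings have no shorter description:

```python
def counting_argument(n):
    # strings of length n: 2**n
    # programs shorter than n: sum of 2**k for k < n, which is 2**n - 1,
    # so at least one length-n string has no shorter program
    strings = 2 ** n
    shorter_programs = 2 ** n - 1
    return strings, shorter_programs
```

The same count at every n is what drives the conclusion that, in the limit, almost all infinite strings are not computable.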
The one place people wonder about possible real-infinity processes in our universe is in quantum mechanics or QFT. There we take averages over a space of possible paths, which is continuum infinite. We observe multiway behavior at the level at which we can distinguish states - which we impute point-wise, a state at each point, incidentally. One speculative possibility is that we might be looking at halves of real possibles with positive measure, real "spannedness", with the extra imputed freedom restricted in reality to the other "endpoint" of an extended event.
There is some confusion over whether the space of reachable computations for QC setups is, as a result, any larger than the computables. The conventional understanding is that it reaches just the computables but may do so faster, by exploiting multi-way evolution. Almost arbitrarily faster for some specific computations - see the previous post on parallel processing. If you can do a countable infinity of ordinary computations in parallel, in finite time you can reach the same computables a deterministic machine can reach in infinite time. This corresponds to one of the "turn the paper 90 degrees" transpositions above.
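The classical analogue of running countably many computations "in parallel" is dovetailing, sketched below; the function names and the `total_budget` cap are assumptions so the sketch terminates. At round n the first n computations each get one more step, so every computation eventually receives unboundedly many steps:

```python
def dovetail(programs, total_budget):
    # Interleave many computations: at round n, advance programs
    # 0..n-1 by one step each. `programs` is a finite list of
    # generators standing in for the countable family; total_budget
    # caps the total number of steps so the sketch halts.
    results = {}
    steps, n = 0, 1
    while steps < total_budget and len(results) < len(programs):
        for i in range(min(n, len(programs))):
            if i in results:
                continue
            try:
                next(programs[i])
            except StopIteration:
                results[i] = 'halted'
            steps += 1
            if steps >= total_budget:
                break
        n += 1
    return results

def counter(k):
    # stand-in "program" that halts after k steps
    for _ in range(k):
        yield None
```

A deterministic machine pays for this with time; the point in the text is that genuinely parallel access to countably many tracks would trade that infinite time for finite time without enlarging the set of reachable outputs.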
You don't get full continuum infinities, for the countable-cover reason - the projections into measurables are continuous maps. At times some have nevertheless claimed real cardinalities might be reached by QC processes, but that is largely a misunderstanding of the issues. If a QC system actually accessed a countably infinite information channel at each step, then in principle leaving it in "unitary evolution" (no measurement projecting onto some chosen countable cover) would "visit" a continuum-infinite state space. But perform any measurement and the computation is finite in the time dimension, and you are back in computable-land. You may just have gotten there in finite time rather than infinite time - if you really had access to a countably infinite parallel process (rather than just a bounded multiway process) for finite time.
Since QFT is our most successful fundamental theory, and it is both non-deterministic and uses continuum infinities to specify states and processes, there are reasonable doubts about the thesis that everything physically realizable is deterministically computable. But the requirements for getting beyond the computables are more stringent than many suppose. I hope this helps.