Jason Cawley
Wolfram Science Group
Phoenix, AZ USA

Registered: Aug 2003
Posts: 712

The class 3 problem, PCE, and a cup of coffee

One of the basic open problems in NKS is what I call the class 3 problem. This asks whether class 3 rules are in general capable of universal computation. While class 4 rules (rule 110 being the canonical example) have been proven capable of universal computation in the infinite size limit, to date no one has shown the same for any class 3 rule. Doing so would be an important step, and a valuable contribution to NKS. Showing that some large subset of class 3s is not capable of universal computation would likewise be an important contribution.

By class 3 rules here, I mean those that intrinsically generate apparent randomness, without the readily apparent localized structures of class 4 rules. People can quibble about any particular case, and no doubt some would want to hive off a small number of 3s that were shown to be universal – if that were accomplished – as “really” being 4s. But if rules whose typical behavior from random initials looks more like rule 30 than rule 110 were shown to be universal, it would be progress. These are the sorts of rules Rudy Rucker likes to call “seething”, or that Andy Wuensche calls “chaos”. The lengths of their minimal Boolean expressions typically grow long (and complicated) quite fast, another objective sign of class 3s. They become fairly common in more general classes of CAs than the 2-color 1D case.
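To make the contrast concrete, here is a minimal sketch (mine, not part of the original post) that evolves rule 30 and rule 110 from the same random initial condition; the width, number of steps, and random seed are arbitrary illustrative choices. Rule 30 typically shows the seething, random-looking texture of class 3, while rule 110 shows localized structures moving through a background texture.

```python
# Minimal elementary CA runner (illustrative sketch, not from the original post).
import random

def eca_step(cells, rule):
    """One synchronous update of an elementary CA with cyclic boundaries."""
    n = len(cells)
    return [
        (rule >> ((cells[(i - 1) % n] << 2) | (cells[i] << 1) | cells[(i + 1) % n])) & 1
        for i in range(n)
    ]

def run_eca(rule, width=79, steps=40, seed=0):
    """Evolve the given rule from a random initial condition and return all rows."""
    random.seed(seed)
    cells = [random.randint(0, 1) for _ in range(width)]
    rows = [cells]
    for _ in range(steps):
        cells = eca_step(cells, rule)
        rows.append(cells)
    return rows

if __name__ == "__main__":
    for rule in (30, 110):
        print(f"--- rule {rule}, random initial condition ---")
        for row in run_eca(rule):
            print("".join("#" if c else "." for c in row))
```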

What is known about the 3s is that they are in general computationally irreducible. That is not quite the same as universal, and whether there is a gap between the two, or whether they tend to coincide in practice, is an important part of what the Principle of Computational Equivalence (PCE) conjecture is about. It is easy to underestimate how much PCE is claiming, or to imagine it is saying something obvious. It isn’t. Abstractly, it is entirely possible that most of the complexity we see is computationally irreducible without being as sophisticated as universality – or its finite analog. PCE claims instead that the underlying unity we experience as “complexity” is the finite analog of universality: sufficient internal flexibility in a system to support universal computation, if given the standard idealized additional aids of infinite size and the like.

We see complexity in the 3s and in the 4s. We can determine that the 3s and the 4s are computationally irreducible. We might leave it at that, and think “computational irreducibility is the distinctive mark of perceived complexity, its underlying cause”. Many of the conclusions of NKS would hold if this were all one could say about complexity from simple rules. Most of the methodological and practical implications NKS has for how we should investigate and deal with complex systems depend only on irreducibility, not on universality in addition. But PCE is a stronger claim than this, one that sticks its neck out and might well prove to be wrong in the end, and one that has the potential to unify our understanding of complexity. It claims more about the subject.

Universality implies (at least occasional) irreducibility, but not the other way around. PCE claims that most of the complexity we actually see comes with both, not the first in the absence of the second. (To see the first implication, just note that a universal system can implement any finite algorithm; it can therefore implement a given irreducible, class 3 behavior.) In effect, PCE says that chaotic systems are in principle programmable, even if in practice we can’t manage to program them.

This might be wrong. Being able to program something might depend on minimizing information spread in various ways. It might require some degree of segregation of effects or control or internal order. Maybe class 3s just can’t be made to exhibit this. Maybe they are irreducible and random, but not orderable. Perhaps all we can do with them is measure gross behaviors and averages, which perhaps have to follow various laws of large numbers and related statistical facts. We might have a thermodynamics of chaos, but never achieve programmability of chaos. And this might be not just a practical limitation stemming from our meager engineering ability, but in the nature of the formal systems themselves. If that were true, then PCE as originally stated would not be correct. Class 3 would be essentially distinguishable from 4, as irreducible but not universal.

It might be worth mentioning in passing that no one has yet shown that all class 4s are universal, either. But the case for it is quite strong, and few familiar with NKS doubt it. The reason is that the process whereby we establish universality for some class of systems works as a ratchet. The simpler a system we’ve already managed to prove universal, the easier it is to prove the next one is, because we only have to show the new candidate system can emulate some class of systems a member of which has already been shown to be universal – or even just a particular instance of a known universal system. It is a lot easier to prove some formal system can emulate a 2-color 1D CA, or just rule 110 alone, than it is to prove it can emulate any possible Turing machine. We can see how past constructions using class 4s have established such emulations. We know what those constructions require in the way of sub-elements we can manipulate. It is an inductive generalization, but one we can feel pretty confident about.
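As a sketch of what one rung of that ratchet involves in the very simplest setting, the fragment below (my illustration, not a construction from NKS) checks a necessary condition for one elementary rule to emulate another under a hypothetical block encoding: each emulated cell is replaced by a fixed block of cells, and one emulated step corresponds to a fixed number of emulating steps. Passing the check on small cyclic configurations is evidence for, not a proof of, an emulation. The rule numbers, blocks, and step count are the things one would search over; the example call tests the simple additive case of rule 90 emulating itself under a cell-doubling encoding, which should pass.

```python
# Necessary-condition check for a block emulation between two elementary CAs
# (illustrative sketch; the parameters in the example call are a toy case).
from itertools import product

def eca_step(cells, rule):
    """One synchronous update of an elementary CA with cyclic boundaries."""
    n = len(cells)
    return tuple(
        (rule >> ((cells[(i - 1) % n] << 2) | (cells[i] << 1) | cells[(i + 1) % n])) & 1
        for i in range(n)
    )

def encode(config, block0, block1):
    """Replace each emulated cell with its block of emulating cells."""
    out = []
    for c in config:
        out.extend(block1 if c else block0)
    return tuple(out)

def looks_like_emulation(rule_a, rule_b, block0, block1, steps, width=8):
    """True if `steps` steps of rule_a on the encoded configuration reproduce the
    encoding of one step of rule_b, for every cyclic configuration of the given width."""
    for config in product((0, 1), repeat=width):
        coarse = eca_step(config, rule_b)          # one step of the emulated rule
        fine = encode(config, block0, block1)
        for _ in range(steps):                     # several steps of the emulating rule
            fine = eca_step(fine, rule_a)
        if fine != encode(coarse, block0, block1):
            return False
    return True

if __name__ == "__main__":
    # Toy case: rule 90 emulating itself under the doubling encoding, two steps
    # of the fine system per step of the coarse one.
    print(looks_like_emulation(rule_a=90, rule_b=90, block0=(0, 0), block1=(1, 1), steps=2))
```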

What does all this have to do with the cup of coffee example? Well, turbulent fluids seem readily to produce intrinsic randomness. We expect sensitive dependence on initial conditions, from what we know of their underlying rules. Point changes in initials tend to amplify and spread, leading to changes in a large number of adjacent areas later on in the evolution of the system. We see the same thing in the 3s. The question of the computational capabilities of a cup of coffee is thus, in effect, an instance of the class 3 problem. The question then is: can any scheme be imagined whereby we could set up initial conditions for the fluid, tweaked and poked just so, so as to get its ordinary ongoing evolution to perform some definite computation for us? And not just a limited one it happened to be good at, like “average out temperatures” or “mix local concentrations”, but one that was effectively up to us?

It may seem an incredible suggestion at the outset. But take a step back and look at how credible what we do all the time would seem to people from a different age who had no notion of formal computation. We arrange delicate patterns of local charge – themselves nothing but small deviations from average local concentrations of astronomical numbers of gossamer particles – on tiny grains of glorified sand. Any thought I can come up with that is clear and distinct enough to be reduced to a formalism, I can represent as some abstract encoding – a mathematical object off in the Platonic realm of the ideas – and then I can get the same hunk of sand to exactly reproduce that formal pattern, in its microscopic arrays of charge. And remember it as long as I like. And subject that pattern to arbitrary formal manipulations, with perfect reliability and control. We speak of “1s and 0s”, but there are no 1s or 0s in a computer. Nor are there any 1s or 0s between my ears. They are off in the realm of the ideas, and the stuff between my ears and the arrangements of gossamer charges on glorified sand just manage to be brought into correspondence with each other, by both being in correspondence with the same abstract pattern or idea.

But don’t we require quite stable and predictable components to do this? More reliable components are helpful, certainly, but if computation depended on them absolutely we would never have been able to scale our computing systems as we have. No, we rely on methods of error correction, which in turn exploit the exponential decrease in the likelihood of multiple independent failures, to get reliable arrangements out of unreliable individual components. As a matter of engineering it helps to have reliable logical atoms, but the reliability of the assemblage can be greater than that of its individual parts. A poor scheme of assembly multiplies modes of failure and results in a whole less reliable than its parts, with overall reliability falling off roughly as the single-component reliability raised to the power of the number of components. But dedicating a portion of the system to checksums, redundancy, and the like can damp and overwhelm this tendency. So no, in principle computation does not depend on absolute reliability of individual components.
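The arithmetic behind that claim can be made concrete with a toy calculation (the numbers are assumed for illustration, not from the post): take components that each work with probability 0.99, compare a plain chain of a hundred of them with a chain of majority-voted triples, and the voted assembly comes out far more reliable than the plain one, with each voted triple more reliable than any single component.

```python
# Toy reliability arithmetic for the redundancy argument (assumed numbers).

def chain_reliability(p_ok, n):
    """A 'poor scheme of assembly': the whole works only if every one of the n
    components works, so reliability falls off as a power of n."""
    return p_ok ** n

def voted_triple_reliability(p_ok):
    """Triple modular redundancy: a voted triple works if at least 2 of its 3
    copies work (idealizing the voter as perfectly reliable)."""
    p_fail = 1 - p_ok
    return p_ok ** 3 + 3 * p_ok ** 2 * p_fail

if __name__ == "__main__":
    p = 0.99                                   # assumed single-component reliability
    print(f"single component:           {p:.6f}")
    print(f"plain chain of 100:         {chain_reliability(p, 100):.6f}")
    print(f"one voted triple:           {voted_triple_reliability(p):.6f}")
    print(f"chain of 100 voted triples: {chain_reliability(voted_triple_reliability(p), 100):.6f}")
```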

We need branching and looping. We need to be able to do different things depending on some prior thing, and even modest dependence on previous conditions allows that. We need to be able to do the same thing over and over, and following regular laws gives that. We need to be able to combine these things in arbitrarily sophisticated ways, and there is the rub. Do turbulent fluids do so? Does rule 30? They do it in computationally irreducible ways, so in a sense we know they have enough going on. But we don’t know whether that “enough” can be partitioned or controlled. We know it would be hard for us, compared to the relative ease of manipulating distinct local particles in a class 4. But we don’t know (yet) whether it is possible in principle, for typical class 3s.

That is the class 3 problem, and PCE predicts a definite answer to it. A prediction is not yet a result. It is a guide to research, and it marks out the problem as important – whichever way the answer winds up going. That is what a hypothesis is for: it sticks its neck out and risks being wrong, to light up the terrain ahead of us, to get us to ask the right questions, and to leave us knowing more than we know today. We should entertain the possibility that class 3 systems can compute, in exactly the same way any practical computer can. But we do not know this, and we should try to move from entertaining the possibility to knowing for a fact that it is, or that it isn’t, so.

How might it be true? It may be that the uncontrolled spread and interaction of local state information typical of class 3 systems is a barrier to our following the computation being performed, to understanding it, and to proving things about it – but not an equal barrier to such computation occurring or being flexible. The difference between 3s and 4s might not be an essential one, with one irreducible but not universal and the other both irreducible and universal. Instead it might be that both are irreducible, the 3s a bit more obviously so than the 4s, and both are universal, with the second attribute much more obvious in the 4s: easier to see, track, and prove things about. From an engineering point of view that is an important difference. When we want a practical way of computing things, ease of our understanding and control is essential. But when we just want to understand what the system might be capable of doing, it is not. The system doesn’t require us to prove it can do something in order to do it. We know from other contexts that proof is a distinct concept from truth, and often a much harder one to attain. Maybe universality is very hard to prove for the 3s, but still true of them.

A hundred years ago nobody knew universal computation existed. Ten years ago few suspected that a system as simple as an ECA might be capable of universal computation, without ever having been designed to have that property. But systems capable of universal computation existed nevertheless, if anywhere in nature there are class 4 phenomena – which NKS thinking certainly tells us is the case. Those systems did not need us to understand what Turing and company discovered in order to be capable of universal computation (or its finite analog, which is all our practical computers have anyway). If the evolution of a turbulent fluid is a class 3 system, and irreducibly complex – as few would deny – it did not need anybody to understand this for it to be true. Well, why should anyone need to know how to limit initial conditions and encode states in tiny vortices, so as to get a turbulent fluid to perform some non-trivial computation, for that fluid’s time evolution to already, and in fact, be performing a non-trivial computation? Perhaps not only an irreducibly complicated one, but any given finite algorithm, for some (perhaps tiny, perhaps “measure zero”) set of conceivable initial conditions?

Is this in the end any more fantastic than the notion that tiny deviations from exactly zero net charge in microscopic regions of glorified sand should be capable of sequences of arrangements such that the same glob of sand, in the same environment and following the same state update rules, can encode and mimic any finite sequence of finite formal structures anyone can conceive? On the face of it, that is just as fantastic. But it is also a fact, the basis on which the computer in front of you works.

Posted 02-11-2005 07:45 PM
Tony Smith
Meme Media
Melbourne, Australia

Registered: Oct 2003
Posts: 168

We may not need Class 3 universality

Jason, thanks again for setting out another core issue so clearly. It made me dig back into my notes on my still not half-started project to argue an alternative perspective.

The first thing I found was a couple of diagrams which I've combined in the attached GIF. I trust the upper diagram concurs with what you have said above about the PCE. The lower diagram represents my thinking at the time on the Edge-of-Chaos/Border-of-Order alternative perspective of Class 4. (If I had time to redo the lower diagram in a more editable format I might make a few minor adjustments, but the key thrust would remain.)

This all builds on my previously mentioned Beating The Retreat page to which I have more recently added a particularly relevant historical footnote:

The retreat was started by a 1993 paper by Melanie Mitchell, Peter Hraber and James Crutchfield which actually questioned Norman Packard's evidence for Chris Langton's problematic proposal for a linear characterisation of CA rule space (which Langton called λ) rather than more general evidence for the edge.
I might add here that I am increasingly confident that in the fullness of time Langton's λ will come to be seen as a straw man, and that using Mitchell et al.'s limited counterpoint as a pretext to avoid the hard work of searching for that elusive Edge/Border was a lazy way for the fledgling complex systems community to avoid work that will eventually prove inescapable. (That they mostly went off to hawk their wares to anybody who could be sold the idea of a fractional percentage advantage in derivative or currency trading is a subject we should leave for somewhere else.)
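For readers who have not met it, Langton's λ for a CA rule is simply the fraction of neighborhoods whose output is a non-quiescent state. A small sketch (mine, not Tony's) computes it for a few elementary rules; that rules 30, 90, and 184 share the same λ while behaving very differently from random initial conditions is roughly the kind of limitation of a one-parameter, linear characterisation being pointed at here.

```python
# Langton's lambda for an elementary CA rule: the fraction of the 8 neighborhoods
# mapping to a non-quiescent state, taking 0 as quiescent (illustrative sketch).

def langton_lambda(rule, neighborhoods=8):
    ones = sum((rule >> i) & 1 for i in range(neighborhoods))
    return ones / neighborhoods

if __name__ == "__main__":
    for rule in (30, 90, 110, 184):
        print(f"rule {rule:3d}: lambda = {langton_lambda(rule):.3f}")
```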

Tony Smith has attached this image:

__________________
Tony Smith
Complex Systems Analyst
TransForum developer
Local organiser

Posted 02-14-2005 09:44 AM
Jason Wesley Ellis


Registered: Mar 2004
Posts: 19

Jason

You say that it hasn't been proven yet that all class 4s are programmable... this brings to my mind the question: what exactly defines a class 4 or a class 3? I would guess most anyone who has read A New Kind of Science would have a general idea, but are there any rigorous definitions? Is this even possible? My guess is yes.

Oh yeah, what kind of modelling has been done to illustrate how we demonstrate universality? I am thinking here of relating a rule (or whatever) to the construction we build and what happens with that rule after that... maybe mapping would be a better term. To what degree are universal systems equivalent? How are our demonstrations of universality similar or different, or can they even be compared?

Posted 11-22-2005 12:43 AM
Jesse Nochella
WRI

Registered: Mar 2004
Posts: 132

Here's a hypothesis.

What about this: if you start a class 4 system with simple initial conditions and run it, it will eventually stop generating any new behavior, whereas a class 3 system can generate new output forever.

For example, if you start a cellular automaton rule like rule 110 with a simple initial condition, such as a single black cell on a uniform white background, it will eventually get to a point that yields no new behavior, so that we may be able to predict the output from there on with some kind of formula. It seems that, for the case of a finite sequence of black and white cells on a uniform white background, every initial condition eventually does this.

This is obviously not the case for rules like rule 30. It seems that rules like these will forever generate output that we would not be able to predict with any formula, no matter how many steps we go.
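One crude way to probe this suggestion numerically (my sketch, not anything from the post) is to evolve a rule from a single black cell and count how many distinct patterns a fixed-width window around the center has shown so far. A count that stops growing is consistent with the behavior settling down, while a count that keeps growing is consistent with continued novelty, though for a finite window and a finite number of steps neither observation proves anything.

```python
# Crude novelty probe: distinct center-window row patterns seen so far when a
# rule is run from a single black cell on a white background (illustrative only).

def eca_step(cells, rule):
    """One synchronous update of an elementary CA with cyclic boundaries."""
    n = len(cells)
    return tuple(
        (rule >> ((cells[(i - 1) % n] << 2) | (cells[i] << 1) | cells[(i + 1) % n])) & 1
        for i in range(n)
    )

def novelty_counts(rule, steps=200, window=31):
    """Number of distinct window patterns seen after each step; the lattice is
    made wide enough that the cyclic boundary never influences the window."""
    width = 2 * steps + window + 2
    mid = width // 2
    cells = tuple(1 if i == mid else 0 for i in range(width))
    seen, counts = set(), []
    for _ in range(steps):
        cells = eca_step(cells, rule)
        seen.add(cells[mid - window // 2 : mid + window // 2 + 1])
        counts.append(len(seen))
    return counts

if __name__ == "__main__":
    for rule in (110, 30):
        c = novelty_counts(rule)
        print(f"rule {rule}: distinct center windows after 50/100/200 steps:",
              c[49], c[99], c[199])
```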

But there are examples of rules where it is not clear whether they will 'terminate' or not even after a long time. What about these? I suppose that just as there are ways for programs like rule 30 to be proven never to repeat from a single black cell on an infinite white background (I heard that somebody found a way to prove this, but I don't know much more about it), there might be ways to prove this for other rules from given conditions.

Besides, if something is not going to go on forever making new stuff, doesn't that imply that it is at some point going to have some reducible character? And if something will for sure keep generating new stuff, doesn't that mean it is akin to the behavior we see in rules like rule 30, and is therefore what we would call class 3?

If a rule like rule 110 were, perhaps very elaborately, to do something that in effect generated new output forever, would it then in that case be more like rule 30?

See if that works.

Posted 11-22-2005 05:32 PM