Jason Cawley
Wolfram Science Group
Phoenix, AZ USA
Registered: Aug 2003
Posts: 712 
Semantic discussions in this area tend to be remarkably sterile. Cybernetics spent decades debating finer points in this respect while adding little to our overall understanding of complexity as a result. (Don't get me wrong - some of the original ideas of that approach were fine, but a lot of the later stuff spun off into bad secondhand philosophy.) Actual progress in understanding complex systems has instead typically come from investigating new types of systems or classes of behavior, with new problems, computer experiments, or simulations. That said, I will address some of your points, because I think they are common and miss part of the argument of the NKS book.
Wolfram has a thesis about the cause of complexity - not just a proposed semantic domain of reference for the word. He might be wrong about it, but he has evidence, and that evidence pretty clearly rules out some long-standing alternate proposals about its causes. Wolfram's thesis is that complexity is caused by computational universality. Or, for the mathematical quibblers, by a finite-state analog of it that I will call "programmability". (Pentium chips aren't infinite, haven't run for infinite time, and don't have access to unlimited memory. In an arbitrarily strict sense, therefore, some results about the mathematical construct "universality" do not apply to them. But in practice they are "universal" computers, which we use to implement any algorithm by changing their program, not by tinkering with the hardware.)
Wolfram is saying that once any system is across the threshold of universality, its behavior will appear to us complex, because it is. For systems that fall short of that threshold, only a "measure zero" subset will appear to us complex, while actually being reducible by some nonobvious "crack", in some appropriate basis or formulation. (These cases - call them the "crackables" - appear intricate but actually have some short formula that fully reduces the computation the system performs.) Most systems on the near side of the universality line will appear to us simple, because they are. Two broad regions, not universal and universal, and one thin grey area between that is reducible but doesn't look it at first sight.
Now, there is a fairly serious open question here where Wolfram's thesis might be wrong. It might be the case that truly irreducible complexity arises short of the universality threshold. We call this the "class 3 problem", relating it to the question of whether class 3 systems in general can support universal computation or not. To date, no one has proven a class 3 system universal, and it may be that the information flow within class 3 systems is too uncontrollable to support universality. The reason this is a serious matter for Wolfram's thesis is that if class 3 systems aren't universal, then universality may be sufficient for complexity but not necessary for it. And if that is the case, it cannot be said to be the cause of complexity in general. One would then need to look for a general cause of computational irreducibility besides universality.
But irreducibility and complexity are going to coincide regardless, up to a modest number of merely apparent ("crackable") cases. Which means, as soon as one has shown that something is irreducible, any alleged necessary cause of complexity had better already be there. If x is absent and the system is already computationally irreducible, then x is not necessary for complexity. x is therefore not the cause of complexity, in general. This same reasoning applies to previous guesses about the cause of complexity, and it demolishes most of them.
Take interaction, aka the "open systems" nostrum. If being open were necessary for complexity, then no closed system would be irreducibly complex. But plenty are (e.g. rule 30 from simple initial conditions). Ergo, openness is simply not necessary for complexity. It is therefore not the cause of complexity, in general.
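To make the closed-system point concrete, here is a minimal sketch (mine, not from the book) of rule 30 run from a single black cell on a cyclic lattice. There is no environment and no input - the system is fully closed and deterministic - yet the pattern it produces is famously complex:

```python
# Rule 30: an elementary cellular automaton - nearest-neighbor, two colors,
# deterministic, and completely closed. Illustrative sketch only.

RULE = 30  # the rule number's binary digits encode the 8-entry lookup table

def step(cells, rule=RULE):
    """One synchronous update on a cyclic lattice."""
    n = len(cells)
    return [
        (rule >> ((cells[(i - 1) % n] << 2) | (cells[i] << 1) | cells[(i + 1) % n])) & 1
        for i in range(n)
    ]

# Start from a single black cell - the simplest nontrivial initial condition.
width, steps = 31, 15
cells = [0] * width
cells[width // 2] = 1

for _ in range(steps):
    print("".join(".#"[c] for c in cells))
    cells = step(cells)
```

Nothing crosses any system boundary here, yet the center column of the output passes standard randomness tests.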
The open idea was based on intuitions from thermodynamics and statistical mechanics. Which dealt with cases with plenty of interacting elements with many degrees of freedom, yet gave perfectly predictable (though statistical) results. Why didn't these always hold? That was the question. And people noticed that the theorems involved depended on system closure. So, having seen a negation of the result, they inferred a negation of that premise. But actually, those theorems also depend on long run final behavior as the thing being classified, not the "transients" involved in getting there, dynamically. If a system is "all transient", it can be as closed as you please, and the old theorems won't apply during the whole period you care about.
Open systems are in effect a way of importing randomness from the environment. And sure, some systems - even ones very simple in their internal composition - can behave in complicated ways if you put them in a complicated environment. But this does not explain the origins of complexity; it just kicks it outside. When people had lots of tools for modeling simple systems, it was natural to put a fairly involved, fully modeled but fundamentally simple system in a random environment: it has 10 linear ways of behaving, and the heat bath picks which at time t. This allows some engineering analysis. But it doesn't actually explain where the complexity is coming from.
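The heat-bath picture can be sketched in a few lines. The system below is trivially simple - ten modes, each a linear shift - and any apparent complication in its trace is imported wholesale from the environment's random mode choices. All names here are mine, for illustration:

```python
import random

# A fundamentally simple system in a random environment: the system has only
# 10 linear modes of behaving; the "heat bath" picks which one at each time t.
# Any complexity in the trace comes from the environment, not the system.

random.seed(0)  # fixed seed so the run is repeatable

modes = [lambda x, k=k: (x + k) % 10 for k in range(10)]  # 10 simple behaviors

x = 0
trace = []
for t in range(20):
    mode = random.choice(modes)  # the environment's choice at time t
    x = mode(x)
    trace.append(x)

print(trace)
```

Each mode on its own is perfectly analyzable; the trace looks irregular only because the choice sequence was irregular to begin with.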
One can set up interacting versions of standard NKS systems easily enough. This effectively just gives them another "initial" condition - or a boundary condition threaded through time, if one prefers. (One can encode each as a version of the other using additional colors if one wants.) This readily gives outcomes similar to switching among rules, or passing system states to another system which then evolves them, or to multiple (usually) noninteracting systems (with a higher number of colors) subsisting on the same lattice and only occasionally interfering. But the basic forms of behavior possible do not change. In a hypothetical infinite limit one might make some hand-waving cardinality arguments (having countable boundary conditions in two directions), but for any realistic finite case, the space of reachable computations is the same.
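As a sketch of what a "boundary condition threaded through time" means, here is a finite rule 30 strip whose left edge is fed from an external stream at each step (illustrative code of my own, not from the post). Absorbing the stream into the initial condition of a wider lattice would give a closed system with the same reachable behavior:

```python
# A driven (open) elementary CA: the left boundary value at each time step is
# read from an external stream - a boundary condition threaded through time.

def driven_step(cells, rule, left_in, right_in):
    """One update of a finite strip with externally supplied edge values."""
    padded = [left_in] + cells + [right_in]
    return [
        (rule >> ((padded[i - 1] << 2) | (padded[i] << 1) | padded[i + 1])) & 1
        for i in range(1, len(padded) - 1)
    ]

env = [1, 0, 0, 1, 1, 0, 1, 0]  # the environment's contribution over time
cells = [0] * 16
for b in env:
    cells = driven_step(cells, 30, b, 0)
    print("".join(".#"[c] for c in cells))
```

The environment stream plays exactly the role of extra initial data: fixing `env` in advance makes the whole evolution as deterministic as any closed run.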
There are undoubtedly systems that it makes sense to model as interacting with an external environment, particularly when the rate of information transfer inside the system is much, much higher than the transfer across the system boundaries. And/or when the information transferred across those boundaries has definite characteristics or structure not found inside the system (monotonic, periodic, purely random and normal, etc). And when this is the case, a good model of the system will separate system and environment and show them in interaction. Fine. But this is a separate question. It does not address the claim that interaction causes something fundamentally new, or gives rise to classes of behavior not otherwise seen. Such claims are commonly made, but (up to the quibbles above, anyway) clearly false. I don't need an open system to get a complicated system. Not all closed systems are simple. The claim that they are is not an interesting alternative idea about complexity or a dispute about how to use terms; it is just plain wrong.
A more basic error is the idea that complexity arises automatically from lots of parts interacting. There are just as many elements in rule 250 as in rule 30, and they interact according to the same rule *form* (nearest neighbors, 2 colors, deterministic, etc). But one behaves in a complicated way and the other does not. Nor does rule 30 require a long initial condition to do something complicated. One might think lots of things happening in parallel is still necessary, though not sufficient. Lots of things happening eventually, sure - that follows from the definition of "complex". But "in parallel", no, not necessary. Complexity arises sooner and more easily with lots going on at once, but e.g. a Turing machine or even a mobile automaton can produce complicated behavior from a single active cell.
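The rule 250 vs. rule 30 comparison is easy to see directly. In the sketch below (helper names mine), both runs use an identical lattice, identical initial condition, and identical rule form; only the eight-bit lookup table differs:

```python
# Same rule *form* for both: nearest-neighbor, two colors, deterministic,
# same number of elements, same single-cell initial condition.

def eca_step(cells, rule):
    """One synchronous update of an elementary CA on a cyclic lattice."""
    n = len(cells)
    return [
        (rule >> ((cells[(i - 1) % n] << 2) | (cells[i] << 1) | cells[(i + 1) % n])) & 1
        for i in range(n)
    ]

def run(rule, width=41, steps=18):
    """Evolve from a single black cell; return the rows as strings."""
    cells = [0] * width
    cells[width // 2] = 1
    rows = []
    for _ in range(steps):
        rows.append("".join(".#"[c] for c in cells))
        cells = eca_step(cells, rule)
    return rows

print("rule 250 (simple, nested):")
print("\n".join(run(250)))
print("\nrule 30 (complex):")
print("\n".join(run(30)))
```

Rule 250 produces a uniform expanding checkerboard; rule 30 produces its well-known irregular triangle. Same parts, same interactions, entirely different behavior.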
Then we get the idea that time is essential to complexity, that it is a matter of "process" rather than "structure". There was a philosophic school built around making much of this supposedly essential distinction. It is certainly true that a rule can be simpler than what that rule does when it is nested, or that the guts of a loop can be simpler than what the loop produces. But this essentially involves the idea of repeating an element, not the idea of time. Formally, it does not matter what is repeated, or in what dimension or direction. "Over and over again" happens in a crystal in space, and in a sequence in time. It is the over and over that matters, not the direction. Otherwise put, any complex phenomenon can be encoded into a structure - a purely formal, abstract one at that. A purely formal abstract structure is off in some platonic realm of ideas or softwareland, not in time.
It does not matter in any of this what a particular school of thought wants complexity to mean or where it thought it comes from. Nor do one's personal associations with the idea matter - what it reminds you of, the context you'd like to think of as canonical, etc. Critical distinctions prune away the mass of mere associations from what is actually essential to a given phenomenon, separating frequent but unnecessary companions of appearances from actual underlying formal causes.
Irreducible complexity is a phenomenon of formal systems, even of pure structures entirely closed and outside of time, irrespective of parallelism - a possible but not necessary consequence of lots of interacting elements. It is not an illusion of system definition or of how you cut x off from the rest of the world around it. It does not depend on underlying stochastic anything, does not depend on empirical instantiation, does not depend on time, does not depend on interaction. It arises at the level of software, when a certain degree of internal flexibility or crosswiring interaction is reached and then repeated.
The threshold for it is quite low. In recursive function theory it appears with "while" loops; in CAs it arises already in the elementary nearest-neighbor case; in TMs it arises with 2 states and 5 colors, and perhaps with as few as 3 colors; in number theory it arises in integer Diophantine equations - which, in case it isn't obvious, are not an open-system interaction process in physical time. The idea that all these other forms of empirically complex "hair" are necessary for "real" complexity is simply untrue - it happens in their absence quite nicely, thank you very much.
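The "while"-loop remark can be illustrated with the standard Collatz example (my choice of illustration, not the post's): a loop a few characters long whose halting behavior, as far as anyone has proven, cannot be shortcut - whether it terminates for every starting value is a famous open problem:

```python
# The "while"-loop threshold: a tiny loop whose long-run behavior is, for all
# anyone has proven, unpredictable without simply running it (Collatz map).

def collatz_steps(n):
    """Count iterations of the Collatz map until n first reaches 1."""
    steps = 0
    while n != 1:
        n = 3 * n + 1 if n % 2 else n // 2
        steps += 1
    return steps

print([collatz_steps(n) for n in range(1, 10)])
```

The step counts jump around irregularly (e.g. 27 takes 111 steps while 28 finishes much faster), with no known closed-form shortcut - a closed, timeless, purely formal system already past the threshold.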
And when the formal causes of it are present in some empirical case, it will arise there too, for exactly the same formal reasons it arises in purely formal structures - without caring even a smidgen for philosophic conundrums about empirical instantiation, "real-time-ness", or subsetting out of the "everything connected to everything else" universe. If the formal structure of relations among a system's parts is isomorphic to a formally complex, irreducible structure in the abstract, then those relations will be irreducibly complex in the real world too. That is why complicated software instructions make chips do complicated rather than simple things, while simple instructions make them do simple things.
There is such a thing as programmability (or universality). We didn't invent it; nature has been doing it all along. But having discovered it (first in elaborate formal constructions of our own, specifically designed to have it as a property), we are now able to recognize it in all sorts of external phenomena. And to distinguish it, critically, from merely habitual associations and vague ideas about the specialness of this or that.
Wolfram recently, and others before him from Church and Turing on, have shown enough that it is by now clear that irreducibility does not require most of the things previous schools of thought imagined complexity needs. He may be right that programmability and complexity coincide, or there may be a band of irreducibility shy of programmability. But one certainly does not need *more* than universality to get to complexity. And getting to universality is very fast - a hop and a skip - and does not require the whole soup of pet ideas peddled by previous complexity notions. It might be even easier to get to complexity than to get to universality - that is the class 3 problem - but it isn't harder.
I hope this is interesting.
