wolframscience.com

A New Kind of Science: The NKS Forum > NKS Way of Thinking > Systems, Complexity and Information
Paul Pomeroy
AdaptiveView

Registered: Aug 2004
Posts: 4

Systems, Complexity and Information

I've had the feeling for many years now that our approaches to computational and artificial intelligence are based upon incorrect "working definitions" of the following three words: System, Complexity and Information. I've read through enough of Stephen Wolfram's book, now, to become convinced that it is, to varying extents, also based upon the same inaccuracies.

I find articulating what I mean in regards to these words, and the significance of what I mean, very difficult. To an extent this is because of the "semantic space" they are thought to cover. Thinking about them, I find, often ends up feeling somewhat analogous to trying to fit 2 cups of Jello into a 1 cup container. Be that as it may, I'm quite willing to concede that my inability to articulate what I mean is simply that: my inability. I'm hoping that people reading the following will forgive me this and not simply dismiss it because of a poor presentation. More than that, I'm hoping that others will "get" what I'm trying to convey and do a better job of explaining things.

System:

The problem isn't with the definition of "system," itself, but in how one defines the boundaries of a system -- i.e., what is and isn't part of a system. As far as I can tell, the membership function for anything we can classify as a system will boil down to this:

Anything that affects and is affected by a system is part of that system.

That seems like a fairly innocuous statement until you begin to really think it through. The more I've followed it to its conclusion, the more I've felt there must be something wrong with it, but if there is, I can't find it. What it implies, and ultimately what it means, is that almost everything we call a "system" is actually a subsystem. Put another way, what it means is that it's pretty much all one System.

Where I have some difficulty with the above is in regards to logical/ideal systems, including math-related systems. These appear to be truly closed systems, capable of affecting but not being affected by externals. I think, though, that even though we use the word "system" in these cases, they are actually, and fundamentally, something different from what we mean when we talk about things like a computer system or biological and environmental systems. It is a mistake, nonetheless, to assume that the closed nature of these logical/ideal "systems" is maintained once they are applied to anything. Implement a "system of logic" in a machine, for example, and that implementation becomes part of the [holistic] System. The implementation both affects and is affected.

The problem with almost all computational intelligence "systems" (and "software systems," in general) is that they are designed and implemented under the influence of a false belief that they are and will remain closed Systems. The computational intelligence landscape is littered with them -- "systems" abandoned because they unexpectedly broke something or were themselves broken by something unexpected.

Complexity:

Wolfram takes an "outsider's" approach to the definition of complexity, not really saying what it is but choosing instead to focus on how it is measured and how that measurement is described. I don't blame him. I think that whenever "the whole is greater than the sum of its parts," it's a safe bet that there's complexity involved but solving for x, where x = whole - sum(parts), is a pretty tough assignment. The problem I see with Wolfram's approach is that it misses the "nature of complexity." There is a dynamic power in complexity -- it is actively productive over time. That power is proportional, not necessarily or exclusively to the number and kind of elements in the system, but to the effective connections between its elements.

I don't have a precise definition for complexity to offer here, but I don't need one to show what's wrong with typical "working definitions" of the word. They fail to account for complexity's dynamism and power. Wolfram looks at the rules and the results of "simple algorithms" and finds the telltale signs of complexity. There is a great deal of significance and value (and often a certain kind of elegance) in what he has to say in this regard. What I'd like to suggest, though, is that his focus is too static -- he wants complexity to be a noun but it's really more of a process and, as a consequence, it seems to me that he ends up mistaking the description for what he's attempting to describe. The complexity he alludes to isn't in the rules or the results but in, and only during, the process of applying the rules. It's while a CA algorithm is running that connections between elements are made and/or become effective. In a sense, the processing of the rules creates complexity and the complexity produces results.
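The "running of the rules" described above is easy to make concrete. Below is a minimal sketch of my own (not code from the book) of an elementary cellular automaton stepper, using Wolfram's standard rule numbering, where bit k of the rule number gives the output for the neighborhood whose three cells read as the binary number k:

```python
def step(cells, rule):
    """Advance an elementary CA one step (wraparound boundaries)."""
    n = len(cells)
    return tuple(
        (rule >> (4 * cells[(i - 1) % n] + 2 * cells[i] + cells[(i + 1) % n])) & 1
        for i in range(n)
    )

def show(rule, width=63, steps=30):
    """Print the evolution from a single black cell, one row per step."""
    cells = tuple(1 if i == width // 2 else 0 for i in range(width))
    for _ in range(steps):
        print("".join("#" if c else "." for c in cells))
        cells = step(cells, rule)

show(30)  # the familiar irregular rule 30 triangle emerges row by row
```

The point about process is visible here: the rule itself is just an 8-bit lookup table, and any single row is just a string of cells; the intricate triangle only appears while `step` is being applied over and over.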

It's quite easy to miss the significance of this, so let me put it another way. Creating effective connections between things creates complexity and complexity is productive -- it always does something (for better or worse, ready or not). What it does is, as Wolfram points out, only predictable up to a point -- with too many simultaneously effective connections there is no way to predict what complexity will produce. This unpredictability is compounded by the fact that active complex "systems" are really subsystems and there are therefore effective connections that extend out to some (also unpredictable) extent into the "real world."

What is predictable about complexity is that its power (its ability to produce) is in some way proportional to the number of effective connections it is created from. If you want to change the world, create something that connects a lot of things (or even better, something that connects a lot of things that are already well connected). Just don't make the mistake of thinking that you will have any way of predicting or controlling how the world will be changed or, worse, think that some particular aspect of the world won't change simply because you didn't intend it to.

Information:

This is another word that is used as a noun but signifies a process. There's actually a hint of this in the word, its -ation suffix meaning "the process of," and there's a more subtle hint in the fact that we say "for your information" (the ubiquitous FYI) rather than "information for you." However, since about 1950 the semantic space covered by this word has drifted away from this idea of process. This, I believe, is partly due to (or is at least coincidental with) the effects of Claude Shannon's paper, "A Mathematical Theory of Communication." It seems all but completely forgotten at this point that he qualified in this paper his use of the word "information," saying, in effect, that of course he didn't really mean "information" as that would imply a whole host of semantic issues his theory didn't need to deal with.

He only uses the word "information" three times in this paper and I can see no reason why the word "data" wouldn't have been a better choice. What he seems to have accomplished was to launch the Information Age based upon a concept of semantic-less (meaningless) information. I don't know about you, but "meaningless information" strikes me as being somewhat of an oxymoron. I was discussing this with an etymologist a few years ago and he reminded me that it was pointless to argue that a particular word had acquired the wrong meaning -- a word means whatever the people using it think it means. I remember being a little unhappy with this idea at first but then it dawned on me that this was exactly the point I was wanting to make. The meaning isn't "out there" in some static form that can be recorded, or transmitted, or stored, or ...

Information isn't a thing. It is, to us at least, a phenomenological experience of the mind's response to stimuli. Our illusion of information is very real, so real that it makes it difficult to see that behind it is a mind doing some very, very complex processing and that without mind (without a sufficiently complex "receiver"), there is no information. There is more to it than just a receiver, though. At the risk of waxing a bit too poetic, the illusion of information is borne on the wings of shared assumptions.

Call this arrangement of black and white pixels you're currently reading a message or posting or whatever, but it is really a designed encoding of data, its design intended to evoke particular responses and guided by my assumption that you read and can comprehend English (one of many assumptions). We are so adept at language that most of what goes on when we speak and listen (or write and read) goes on below the level of consciousness. It takes effort to see that the illusion of information requires mind and even more effort to see that mind requires society (i.e., that minds are social -- they require the shared conventions and knowledge represented by the society they are immersed in).

Along with computational approaches that have been clobbered by something outside of the system boundaries they were designed for, or by the products of the unintended complexity they created, are approaches that have failed because someone mistook data for information. Some of the early AI projects would seem almost comical now were it not for the billions of dollars that were wasted on them. It took a very long time for designers to discover that Shannon's "meaningless information" was to their software just that: meaningless. People's "common sense" notion of information is still wrong. The recent uproar over Google's GMail software that adds "targeted advertisements" to the email display is a good example. Behind it is the common belief that emails contain information and that Google's software must be getting the same meaning out of them that a person would. "How else," one can imagine them asking, "could Google know which ads to include?"

----------------

I could write more but have most likely already written too much. I'll just post this as is and hope that there's enough in the above to warrant a discussion of the issues.

Posted 08-24-2004 11:15 PM
Jason Cawley
Wolfram Science Group
Phoenix, AZ USA

Registered: Aug 2003
Posts: 712

Semantic discussions in this area tend to be remarkably sterile. Cybernetics spent decades debating finer points in this respect while adding little to our overall understanding of complexity as a result. (Don't get me wrong - some of the original ideas of that approach were fine, but a lot of the later stuff spun off into bad second-hand philosophy). Actual progress in understanding complex systems has instead typically come from investigating new types of systems or classes of behavior, with new problems or computer experiments or simulations. That said, I will address some of your points because I think they are common, and miss part of the argument of the NKS book.

Wolfram has a thesis about the cause of complexity. Not just a proposed semantic domain of reference for the word. He might be wrong about it, but he has evidence, and that evidence pretty clearly rules out some long standing alternate proposals about its causes. Wolfram's thesis is that complexity is caused by computational universality. Or, for the mathematical quibblers, by a finite state analog of it that I will call "program-ability". (Pentium chips aren't infinite nor have they run for infinite time nor with access to unlimited memory. In an arbitrarily strict sense, therefore, some results about the mathematical construct "universality" do not apply to them. But in practice they are "universal" computers, which we use to implement any algorithm by changing their program not by tinkering with the hardware).

Wolfram is saying that once any system is across the threshold of universality, its behavior will appear to us complex, because it is. For systems that fall short of that threshold, only a "measure zero" subset will appear to us complex while actually being reducible by some non-obvious "crack", in some appropriate basis or formulation. (These cases - call them the "crackables" - appear intricate but actually have some short formula that fully reduces the computation the system performs). Most systems on the near side of the universality line will appear to us simple, because they are. Two broad regions, not universal and universal. One thin grey area between, that is reducible but doesn't look it at first sight.

Now, there is a fairly serious open question here where Wolfram's thesis might be wrong. It might be the case that truly irreducible complexity arises short of the universality threshold. We call this the "class 3 problem" - relating it to the question whether class 3 systems in general can support universal computation or not. To date, no one has proven a class 3 system is universal. And it may be that the information flow within class 3s is too uncontrollable to support universality. The reason this is a serious matter for Wolfram's thesis is, if 3s aren't universal then universality may be sufficient for complexity but is not necessary for it. And if that is the case, it cannot be said to be the cause of complexity, in general. One would then need to look for a general cause of computational irreducibility, besides universality.

But irreducibility and complexity are going to coincide regardless, up to a modest number of merely apparent ("crackable") cases. Which means, as soon as one has shown that something is irreducible, any alleged necessary cause of complexity had better already be there. If x is absent and the system is already computationally irreducible, then x is not necessary for complexity. x is therefore not the cause of complexity, in general. This same reasoning applies to previous guesses about the cause of complexity, and it demolishes most of them.

Take interaction, aka the "open systems" nostrum. If being open were necessary for complexity, then no closed system would be irreducibly complex. But plenty are (rule 30 from simple initials e.g.). Ergo, openness is simply not necessary for complexity. It is therefore not the cause of complexity, in general.

The open idea was based on intuitions from thermodynamics and statistical mechanics. Which dealt with cases with plenty of interacting elements with many degrees of freedom, yet gave perfectly predictable (though statistical) results. Why didn't these always hold? That was the question. And people noticed that the theorems involved depended on system closure. So, having seen a negation of the result, they inferred a negation of that premise. But actually, those theorems also depend on long run final behavior as the thing being classified, not the "transients" involved in getting there, dynamically. If a system is "all transient", it can be as closed as you please, and the old theorems won't apply during the whole period you care about.

Open systems are in effect a way of importing randomness from the environment. And sure, some systems - even very simple ones in their internal composition - can behave in complicated ways if you put them in a complicated environment. But this does not explain the origins of complexity. It just kicks it outside. When people had lots of tools for modeling simple systems, it was natural to put a fairly involved, fully modeled but fundamentally simple system in a random environment. It has 10 linear ways of behaving and the heat bath picks which at time t. This allows some engineering analysis. But it doesn't actually explain where the complexity is coming from.
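The "importing randomness" point can be illustrated with a small experiment of my own construction (not from the book): rule 90, which from a simple initial condition only ever produces a nested pattern, driven on its left edge by random bits from an "environment". The interior then looks complicated, but the complication came in from outside:

```python
import random

def driven_step(cells, rule, env_bit):
    """One step of an elementary CA on an open row: the left neighbor of
    cell 0 is supplied by the environment; the right boundary is fixed at 0."""
    padded = [env_bit] + cells + [0]
    return [(rule >> (4 * padded[i] + 2 * padded[i + 1] + padded[i + 2])) & 1
            for i in range(len(cells))]

random.seed(0)
cells = [0] * 40
for _ in range(20):
    cells = driven_step(cells, 90, random.getrandbits(1))
    print("".join("#" if c else "." for c in cells))
```

With the environment bit held at 0 the row stays blank forever; every bit of irregularity in the output is traceable to the injected stream, which is exactly the sense in which openness "kicks the complexity outside" rather than explaining it.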

One can set up interacting versions of standard NKS systems easily enough. This effectively just gives them another "initial" condition - or a boundary condition threaded through time if one prefers. (One can encode each as a version of the other using additional colors if one wants). This readily gives outcomes similar to switching among rules, or passing system states to another system which then evolves them, or to multiple (usually) non-interacting systems (with a higher number of colors) subsisting on the same lattice, and only occasionally interfering. But the basic forms of behavior possible do not change. In a hypothetical infinite limit one might make some hand waving cardinality arguments (having countable boundary conditions in two directions), but for any realistic finite case, the space of reachable computations is the same.

There are undoubtedly systems that it makes sense to model as interacting with an external environment, particularly when the rate of information transfer inside the system is much, much higher than the transfer across the system boundaries. And/or when the information transferred across those boundaries has definite characteristics or structure not found inside the system (monotonic, periodic, purely random and normal, etc). And when this is the case, a good model of the system will separate system and environment and show them in interaction. Fine. But this is a separate question. It does not address the claim that interaction causes something fundamentally new, or gives rise to classes of behavior not otherwise seen. Such claims are commonly made, but (up to the quibbles above any way) clearly false. I don't need an open system to get a complicated system. Not all closed systems are simple. The claim that they are is not an interesting alternative idea about complexity or a dispute about how to use terms, it is just plain wrong.

A more basic error is the idea that complexity arises automatically from lots of parts interacting. There are just as many elements in rule 250 as in rule 30, and they interact according to the same rule *form* (nearest neighbors, 2 colors, deterministic, etc). But one behaves in a complicated way and the other does not. Nor does rule 30 require a long initial condition to do something complicated. One might think lots of things happening in parallel is still necessary, though not sufficient. Lots of things happening eventually, sure - that follows from the definition of "complex". But "in parallel", no, not necessary. Complexity arises sooner and more easily with lots going on at once. But e.g. a Turing machine or even a mobile automaton can produce complicated behavior from a single active cell.
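One way to make the rule 250 / rule 30 contrast concrete (a quick experiment of my own, not from the book) is to generate both patterns and see how a general-purpose compressor does on them. Rule 250's regular checkerboard triangle compresses drastically; rule 30's pattern largely resists:

```python
import zlib

def pattern(rule, steps=200):
    """Bytes of the first `steps` rows evolved from a single black cell."""
    width = 2 * steps + 1
    cells = [0] * width
    cells[steps] = 1
    rows = []
    for _ in range(steps):
        rows.append(bytes(cells))
        # cells[i - 1] wraps at i == 0, but the edges stay 0 for all these steps
        cells = [(rule >> (4 * cells[i - 1] + 2 * cells[i] + cells[(i + 1) % width])) & 1
                 for i in range(width)]
    return b"".join(rows)

print(len(zlib.compress(pattern(250))))  # small: the regularity is found
print(len(zlib.compress(pattern(30))))   # much larger: little regularity to exploit
```

Same rule form, same number of elements, same initial condition; the difference in what can be squeezed out of the two outputs is entirely down to what the specific rules do.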

Then we get the idea that time is essential to complexity, that it is a matter of "process" rather than "structure". There was a philosophic school built around making much of this supposedly essential distinction. It is certainly true that a rule can be simpler than what that rule does when it is nested, or that the guts of a loop can be less than the loop. But this essentially involves the idea of repeating an element, not the idea of time. Formally, it does not matter what is repeated and in what dimension or direction. Over and over again happens in a crystal in space, and in a sequence in time. It is the over and over that matters, not the direction. Otherwise put, any complex phenomenon can be encoded into a structure and a purely formal, abstract one at that. A purely formal abstract structure is off in some platonic realm of ideas or softwareland, not in time.

It does not matter in any of this what a particular school of thought wants complexity to mean or where it thought it comes from. Nor do one's personal associations with the idea matter - what it reminds you of, the context you'd like to think of as canonical, etc. Critical distinctions prune away the mass of mere associations from what is actually essential to a given phenomenon, separating frequent but unnecessary companions of appearances from actual underlying formal causes.

Irreducible complexity is a phenomenon in formal systems, even as pure structures entirely closed and outside of time, irrespective of parallelism, a possible but not necessary consequence of lots of interacting elements. It is not an illusion of system definition or how you cut x off from the rest of the world around it, does not depend on underlying stochastic anything, does not depend on empirical instantiation, does not depend on time, does not depend on interaction. It arises at the level of software, when a certain degree of internal flexibility or cross-wiring interaction is reached and then repeated.

The threshold for it is quite low - in recursive function theory it appears with "while" loops, in CAs it arises already in the elementary nearest neighbor case, in TMs it arises with 2 states and 5 colors, and perhaps with as few as 3 colors, in number theory it arises in integer Diophantine equations - which, in case it isn't obvious, are not an open system interaction process in physical time. The idea that all these other forms of empirically complex "hair" are necessary for "real" complexity is simply untrue - it happens in their absence quite nicely, thank you very much.

And when the formal causes of it are present in some empirical case, it will arise there too, for exactly the same formal reasons it arises in purely formal structures. Without caring even a smidgen for philosophic conundrums about empirical instantiation or real time -ness or subsetting out of the "everything connected to everything else" universe. If the formal structure of relations among a system's parts is isomorphic to a formally complex, irreducible structure in the abstract, then those relations will be irreducibly complex in the real world too. That is why complicated software instructions make chips do complicated rather than simple things, while simple instructions make them do simple things.

There is such a thing as program-ability (or universality). We didn't invent it, nature has been doing it all along. But having discovered it (first in elaborate formal constructions of our own, specifically designed to have it as a property), we are now able to recognize it in all sorts of external phenomena. And to distinguish it, critically, from merely habitual associations and vague ideas about the specialness of this or that.

Wolfram recently, and others before him from Church and Turing on, have shown enough that it is by now clear that irreducibility does not require most of the things previous schools of thought imagined complexity needs. He may be right that program-ability and complexity coincide, or there may be a band of irreducibility shy of program-ability. But one certainly does not need *more* than universality to get to complexity. And getting to universality is very fast - a hop and a skip - and does not require the whole soup of pet ideas peddled by previous complexity notions. It might be even easier to get to complexity than to get to universality - that is the class 3 problem - but it isn't harder.

I hope this is interesting.

Posted 08-26-2004 04:12 PM
Paul Pomeroy
AdaptiveView

Registered: Aug 2004
Posts: 4

Jason,

Thanks for the response and for the time you put into it. It's appreciated.

I hope this is interesting.


I think it's very interesting. Although I'm uneducated (perhaps woefully) in regards to much of what you say, with a few exceptions I'm able to follow the gist of what you're explaining. I'll have to learn much more before I'm able to really understand some of your arguments.

There are two statements of yours, though, that I'd like to comment on:

1. Jason Cawley writes: A more basic error is the idea that complexity arises automatically from lots of parts interacting. There are just as many elements in rule 250 as in rule 30, and they interact according to the same rule *form* (nearest neighbors, 2 colors, deterministic, etc). But one behaves in a complicated way and the other does not. Nor does rule 30 require a long initial condition to do something complicated. One might think lots of things happening in parallel is still necessary, though not sufficient. Lots of things happening eventually, sure - that follows from the definition of "complex". But "in parallel", no, not necessary. Complexity arises sooner and more easily with lots going on at once. But e.g. a Turing machine or even a mobile automaton can produce complicated behavior from a single active cell.


Such comparisons between rule 30 and rule 250 (or 90 or 254) are misleading, I think. If one looks at the 8 boxes (sub-rules) for these rules and the examples shown between pages 24 and 27 there are (at least) 2 significant differences:

1) For rules 90, 250 and 254, there are only 5 sub-rules that ever apply (but all 8 sub-rules are applied for rule 30), and

2) Rule 30's form does differ from these other rules in terms of left-right symmetry. Note that the input "triplets" in the boxes are symmetrical (around the middle cell) for boxes 1, 3, 6 and 8 but not boxes 2, 4, 5 and 7. Note also that boxes 2 and 5, and 4 and 7, are equivalent (they're flipped versions of each other). What rules 90, 250 and 254 have in common is that these flipped pairs (2 and 5, 4 and 7) each produce the same output -- i.e., they set their applicable cells to the same color. Rule 30, on the other hand, does not. Specifically, the output colors in boxes 2 and 5 are different. This introduces a non-symmetrical "behavior" in the output. I haven't gone through all of the rules but am willing to bet that any producing "complicated" output has a different output in either its 2/5 or 4/7 rule pairs or an asymmetrical initial state.
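The flipped-pair observation is easy to check mechanically. In the standard rule numbering, bit k of the rule number gives the output for the neighborhood whose three cells read as the binary number k, so box 2 (110) and box 5 (011) are bits 6 and 3, and box 4 (100) and box 7 (001) are bits 4 and 1. A small sketch of my own, with those bit positions as the only assumption:

```python
def is_left_right_symmetric(rule):
    """True when mirrored neighborhoods 110/011 and 100/001 give equal outputs."""
    bits = [(rule >> k) & 1 for k in range(8)]
    return bits[6] == bits[3] and bits[4] == bits[1]

for r in (30, 90, 250, 254):
    print(r, is_left_right_symmetric(r))  # rule 30 is the odd one out
```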

2. Jason Cawley writes: Then we get the idea that time is essential to complexity, that it is a matter of "process" rather than "structure". There was a philosophic school built around making much of this supposedly essential distinction. It is certainly true that a rule can be simpler than what that rule does when it is nested, or that the guts of a loop can be less than the loop. But this essentially involves the idea of repeating an element, not the idea of time. Formally, it does not matter what is repeated and in what dimension or direction. Over and over again happens in a crystal in space, and in a sequence in time. It is the over and over that matters, not the direction. Otherwise put, any complex phenomenon can be encoded into a structure and a purely formal, abstract one at that. A purely formal abstract structure is off in some platonic realm of ideas or softwareland, not in time.


What I find in this statement is a confusion between recipe/result and process. I think, too, that perhaps there is a difference between "complicated" and "complex" that isn't being stated here. My view of this is that, for example, Rule 30, the hardware/software that "runs" it and the result (the particular arrangement of black and white cells) that is produced are complicated but the process of running the rule and producing the output is complex.

I would also say that the process of you looking at the output from Rule 30 is complex (very complex) but that it is a mistake to attribute any of this complexity to the black and white pixels on your computer monitor. At this point, I really can't comprehend this idea of "static complexity" -- it strikes me as being as impossible as "static energy."

Posted 08-26-2004 10:58 PM
Jason Cawley
Wolfram Science Group
Phoenix, AZ USA

Registered: Aug 2003
Posts: 712

Of course rule 30 and rule 250 are different rules. They are however rules of the same form, making use of the same range, number of colors, etc. Complexity is not a result of the set up therefore (or it would be there in 250 like cases as well), but of the specific things rule 30 (and others like it) do within that set up.

As for the symmetry point, you might conjecture that anything symmetric would be simple. If you restrict yourself to 2 color rules that might be true, but it isn't true in general. Asymmetry is not the cause of complexity. It can make it arise more readily - just as processing many cells in parallel in a CA can give rise to complexity more readily than processing one cell at a time in a mobile automaton. But it is not necessary for complex results. Nor are things like irreversibility, or a regular lattice, or an unchanging number of elements, or discreteness.

A New Kind of Science is a search for complexity onset, for a boundary between simplicity and complexity, as part of an investigation of the cause of complexity. That means you take some obviously complicated things and see what they have in common, sure. But that gives rise to scads of misleading possibilities. It is not yet critical analysis, just some initial intuition to guide where one might look. To isolate the sufficient causes of complexity, you have to systematically strip away accompanying characteristics, and see if you strip away all complexity too, as a result.

See, any thesis that x is the cause of complexity entails a formal conjecture, that all systems that lack x will be simple. It can therefore be refuted by a single construction or exhaustive search result, that exhibits a system that is still complex without x being present. Any such exhibited system is a counterexample to the hypothesis that x is the cause of complexity.

One starts with very simple systems, so simple that they cannot give rise to significant complexity. One adds just a wee bit more computational sophistication - more colors, a wider range, whatever. And very soon, sooner than you'd have guessed, there it is. It is a threshold effect, not a "proportional to x" effect. And it doesn't need all sorts of guesses people have thought it would at this time or that. The result is one finds a bunch of "simplest complicated critters" of type A, B, C... Which together show that things more involved than these very simple examples have in them, cannot be necessary for the occurrence of complexity.

Wolfram then conjectures a remaining common characteristic in even these reduced examples. He thinks they are all capable of universal computation (or its finite state analog). He shows that lots of other things aren't necessary for complexity. He shows and argues, I think pretty convincingly, that universality is sufficient for complexity. The remaining wiggle room, the remaining uncertainty about the cause of complexity, has been crushed.

All that is left is a modest possible gap, that universality might be sufficient for complexity but not necessary for it, if e.g. the class 3 systems are complex (which they are, clearly, other than a few "crackables" at the class 2 to class 3 boundary) but are not yet universal. This remaining wiggle room is still there because no one has proven class 3s capable of universal computation - and because doing that in general is hard. Below the class 3s, there is going to be simplicity only. The universals (pretty obviously, all of class 4 will fall here) are going to be capable of complex behavior (because they are capable of any finite algorithm's behavior).

As for your difficulty understanding complexity as a stand alone, abstract thing, it is easily formulated in terms I think you will understand. There isn't any finite algorithm that significantly compresses the evolution of rule 30. The only way to see what the thousandth step is going to be from some initial condition, is to do the entire calculation the system itself does. Any (finite, anyway) mind, computer, processor, that looks at it, is going to face the same irreducible computational task.

Just as in software complexity theory we can speak of the difficulty of a task independent of the hardware platform we perform it on, we can speak of the computational complexity of any algorithm run, independent of the details of the system that runs it. Some tasks are "inherently sequential", we say. It isn't the physical instance that is complicated. (If it were, whether it were complicated or not would depend on the details of the physical instantiation). It is the formal critter, the software, the abstract mathematical object itself. Already.

Wolfram points out that the computer processing the result (e.g. us) only has access to the space of computations reachable by universal computation. When the system does, too, the fastest way to its results is a "one to one and onto" mimicry of its internal evolution. An irreducible computation is one that is not "wasting" a lot of effort "recomputing" the same result, or inessential intermediate results not actually necessary to arrive at the eventual output. When we can reduce a computation, it is because we can reduce such wasted effort. If it isn't there, we can't cut it out.

Examples may clarify things. I can reduce rule 250 because of the regularities within it. The value of the center cell, from a single black cell initial condition, is simply a function of the step number mod 2. Knowing the color at step n is therefore as simple as determining whether n is even or odd - which I can do very rapidly. But I can't reduce rule 30, and tell you the center cell at step n by any such shortcut. I have to calculate a diamond-shaped portion of the entire pattern down to step n, because any cell in the pattern at step n/2 can still affect the value of the center cell at step n, and can affect it in a non-trivial way. This is a math-like fact about rule 30 in the abstract, and is entirely independent of the platform or of who is looking at it - just as a theorem about triangles is independent of any physical triangular-ish shape.
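A toy check of the reduction claims above may help. This is my own sketch (the names `step` and `center_column` are mine, not from the post): a straightforward elementary-CA evolver, used to confirm the rule 250 shortcut and to show that no comparably cheap formula is on offer for rule 30.

```python
# Minimal elementary-CA evolver (sketch; assumes a single black cell at
# position 0 on an infinite white background).

def step(cells, rule):
    """Apply one step of elementary CA `rule` to a dict {position: 0 or 1}."""
    lo, hi = min(cells), max(cells)
    new = {}
    for i in range(lo - 1, hi + 2):
        l, c, r = cells.get(i - 1, 0), cells.get(i, 0), cells.get(i + 1, 0)
        new[i] = (rule >> (4 * l + 2 * c + r)) & 1
    return new

def center_column(rule, steps):
    """Colors of the center cell at steps 0..steps."""
    cells = {0: 1}
    col = [1]
    for _ in range(steps):
        cells = step(cells, rule)
        col.append(cells[0])
    return col

# Rule 250: the center cell simply alternates, so step n is predictable
# from n mod 2 alone -- a genuine computational shortcut.
assert center_column(250, 20) == [1 - (n % 2) for n in range(21)]

# Rule 30: the center column starts 1, 1, 0, 1, 1, ... with no such
# pattern; one computes the evolution itself to get it.
print(center_column(30, 20))
```

The dictionary representation is just for clarity; a padded list works equally well and is faster for long runs.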

I hope this helps.

Report this post to a moderator | IP: Logged

Old Post 08-27-2004 03:02 PM
Jason Cawley is offline Click Here to See the Profile for Jason Cawley Click here to Send Jason Cawley a Private Message Edit/Delete Message Reply w/Quote
Paul Pomeroy
AdaptiveView

Registered: Aug 2004
Posts: 4

Jason,

Thanks once again. Yes, it does help some, but probably not to the extent, or in the manner, you may have hoped for. One of the things your posts in this thread (and what I've read in at least the first few chapters of Wolfram's book) indicate is that there is some imprecision (ambiguity) in how the words "complex" and "complicated" are being used. I don't at all mean to imply that someone is at fault here. The words are, in general usage, overlapping to a significant degree -- something clearly indicated by the fact that "simple" is an antonym for both of them.

It appears to me that both you and Stephen use the words interchangeably, referring to the results of some rule being processed at one point as being "complicated" and another as "complex." Given the subject of NKS, it would seem appropriate to have these words more precisely defined. This is hardly a matter of semantic preference. There are different things at play here and giving them the same name is confusing (and, in fact, indicative of an ambiguity in how complexity is being thought of).

My intuition, for lack of a better word, is that anything having to do with "difficult" should fall within the semantic space of "complicated." Something, then, is complicated to the extent that it is difficult to describe, or predict, or represent, and so on. Removing these characteristics from "complex/complexity" makes it easier to talk about complicated things that are not the product of complexity -- as a completely random pattern, for example, may be. It also allows more readily for the idea of complex "things" arising from uncomplicated ("simple") rules. This seems to me to be the point Wolfram is making, after all -- that "complicated" and "complex" are two different aspects that, while related, are not two sides of the same coin. What's puzzling to me, then, is why the words continue to be used interchangeably.

--------

You refer to my "difficulty understanding complexity as a stand alone, abstract thing." That is not exactly what I said, though. I said I had difficulty understanding it as something static. You seem quite satisfied with referring to inputs and results as complex. Maybe what's more important here is the location of complexity. You, in a sense, say it's a matter of "where." What I'm trying to show (and doing a poor job of it, it seems) is that it's a matter of "where and when."

I don't know if the following is going to help or hurt, but try this "thought experiment:" (Each of the following paragraphs ends with a question. In order for the experiment to work, you should answer the question before reading the next paragraph.)

You're given a sealed black box that on one side has a numeric keypad (with an Enter key) and a single white light. There are no instructions or descriptions accompanying it, but you soon discover that entering a number (it accepts any number between 1 and 999,999) sometimes results in the light turning on for one second, that entering the same number always produces the same result, and that there is absolutely no discernible correlation between the number entered and the light turning on. Given these facts, is this box a complex device?



Now, let's say that by some extraordinary stroke of luck you realize that if you use the number you've entered as an index into the results of Rule 30 (with the number representing the step) and then look at the middle cell for that step then the light turns on whenever that cell is white. Once again, given these facts, is this box a complex device?



Let's say that you now find a way to open the box and find the light is controlled by a computer and that you can display the program that is taking your numeric input and, as appropriate, turning on the light. Assuming you will now state that this is certainly a complex device, and turning your attention to the rules and algorithms on the computer, are these necessarily as complex as Rule 30 (and the CA "run" software)?



The problem is, when you open the box you don't find a computer and software. Instead, you find a large disk with lots of tiny holes in it. With a little experimentation you discover that when you enter a number the disk is rotated to a particular position over a sensor and that when the disk stops rotating such that a hole is over the sensor it turns the light on. In terms of its complexity, how does this compare to the "computer and software" version in the previous paragraph?
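For what it's worth, the box's outside behavior (though deliberately not its internals, which is the whole point of the experiment) can be sketched in a few lines. The names `rule30_center` and `black_box` are hypothetical, mine rather than anything from the thread:

```python
def rule30_center(n):
    """Center cell of rule 30 at step n, starting from a single black cell.

    Rule 30's update is: new cell = left XOR (center OR right).
    """
    row = [1]
    for _ in range(n):
        padded = [0, 0] + row + [0, 0]
        row = [padded[i - 1] ^ (padded[i] | padded[i + 1])
               for i in range(1, len(padded) - 1)]
    return row[len(row) // 2]

def black_box(n):
    """The sealed box: the light turns on exactly when the cell is white (0)."""
    return rule30_center(n) == 0
```

From the keypad side, `black_box` is a deterministic but patternless-looking mapping; whether it is implemented by this code, a CPU, or a punched disk is invisible to the user.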

--------

Here's another one. Two people are given the task of creating a graphic design (to be displayed on a computer) with a grid of "spots" that flash on and off. One writes a program that paints the spots black and then white. The other creates the attached JPEG image.

Which is simpler? Which is more complex? Does your answer change if you are not looking at the image?

Paul Pomeroy has attached this image:


Old Post 08-27-2004 09:09 PM
Mike Lin
MIT
Cambridge, MA

Registered: Nov 2003
Posts: 14

Paul,

In regard to your thought experiment: I think it is an example of an objection to NKS that can be expressed in a way more paradigmatically aligned with Wolfram. Specifically: there exist examples of systems whose behavior looks complex but is actually simple. So, looking at the behavior of a system, how do we know whether it is REALLY complex? We discussed this previously in an old thread:

http://forum.wolframscience.com/sho...s=&threadid=129

I believe another word Wolfram uses that you may want to consider is "sophistication", e.g., "Phenomena that appear to us complex can be viewed as computations of equivalent sophistication." Here Wolfram refers specifically to the computational power of the system, whereas your sense of "complicated" seems to encompass more. So e.g. Wolfram would say that an idealized Pentium is no more sophisticated than Rule 110, but by any intuitive sense of the word, it's more complicated.

-Mike


Old Post 08-28-2004 01:54 PM
Mike Lin
MIT
Cambridge, MA

Registered: Nov 2003
Posts: 14

Jason,

Re:


Wolfram then conjectures a remaining common characteristic in even these reduced examples. He thinks they are all capable of universal computation (or its finite state analog). He shows that lots of other things aren't necessary for complexity. He shows and argues, I think pretty convincingly, that universality is sufficient for complexity. The remaining wiggle room, the remaining uncertainty about the cause of complexity, has been crushed.

All that is left is a modest possible gap, that universality might be sufficient for complexity but not necessary for it, if e.g. the class 3 systems are complex (which they are, clearly, other than a few "crackables" at the class 2 to class 3 boundary) but are not yet universal. This remaining wiggle room is still there because no one has proven class 3s capable of universal computation - and because doing that in general is hard. Below the class 3s, there is going to be simplicity only. The universals (pretty obviously, all of class 4 will fall here) are going to be capable of complex behavior (because they are capable of any finite algorithm's behavior).


One still has to consider the possibility that there is no threshold effect for universality at all, so that universality is not that common even among class 4's. You and I discussed this tangentially on a previous thread, but I was wondering if you could sum up (or point me to your previous summation of) Wolfram's evidence to the contrary.

-Mike


Old Post 08-28-2004 02:20 PM
Paul Pomeroy
AdaptiveView

Registered: Aug 2004
Posts: 4

Mike,

Thanks for the observations and the thread link. The question of what is or isn't really complex is certainly part of what I'm concerned about. Regarding "sophistication," it's partly in the same semantic arena as complex and complicated but it seems to me that it carries too much of a negative connotation (e.g., one definition for the word is "a deliberately invalid argument displaying ingenuity in reasoning in the hope of deceiving someone"). Sophistication seems more applicable to political campaign ads than computations.

I was doing a little "dictionary surfing," looking for other possibilities and came upon this (for the word "intricate" - from Webster's Revised/Unabridged):

Syn: Intricate, Complex, Complicated.

Usage: A thing is complex when it is made up of parts; it is complicated when those parts are so many, or so arranged, as to make it difficult to grasp them; it is intricate when it has numerous windings and confused involutions which it is hard to follow out. What is complex must be resolved into its parts; what is complicated must be drawn out and developed; what is intricate must be unraveled.

In the same dictionary, one finds the following definition for "complex:"
Involving many parts; complicated; intricate.

One can conclude from the first quotation that complex things need not be complicated or intricate, but this is contradicted by the second quote.

I don't want to get hung up on word games here. I mention this only because I find it indicative of a general (widespread) confusion -- one that's been around for a while. I'm aware that the definition given above for "complex" is a bit "old school," too. What's being discussed here in this thread (I think!) and what's discussed in NKS has to do with the kind of complexity behind "the whole is greater than the sum of its parts."

Your statement,
"Here Wolfram refers specifically to the computational power of the system, whereas your sense of "complicated" seems to encompass more."

tells me that I still haven't done a very good job of expressing myself. The "computational power" of a system is in the vicinity of how I view complexity. As for "complicated," it's not a matter of encompassing more or less but of encompassing something different.

Where Wolfram is pursuing a "bottom up" approach, looking for the simplest things that can be called complex and the "rules" for how that complexity arises, my interest in it has come from the opposite direction. I started looking at "systems" and "complexity" in the early '90s when I was writing automation software used to control IBM mainframe computers. Since then, I've been refining my view of these, taking various large-scale events and seeing how my concepts of "system" and "complexity" fit in, adjusting them when they didn't. Mostly, I've been looking at things that have "gone wrong," trying to understand how misperceptions about systems and complexity have played a part. More recently, I've added "information" to the list, as there are certain types of snafus that are most easily explained as stemming from misperceptions about what information actually is (e.g., the craft lost at Mars because one part of the software was dealing with thrust data in pounds-force and another in newtons).

What prompted me to start this thread is that, in skimming through Wolfram's book, I was finding indications of the type of thinking about systems, complexity and information that are common in "things that go wrong." The cornerstone of this type of thinking has to do with projecting our subjective experience out on the world and trusting that this is how the real world works. (One might say, in fact, that the scientific method succeeds to the extent that it corrects this mistake.) I see a clear sign of this in the confusion between what requires complexity in us to understand and what is complex (regardless of our understanding). I think that, for example, calling the output from Rule 30 complex is incorrect.

What I was aiming at in the beginning of my last post, but never got to, is that "complicated" should refer exclusively to how difficult something is to describe. Describing requires a describer and, if the description is to be of any use, a receiver that can make use of the description. It is, then, a doubly subjective methodology. Complicatedness therefore becomes a subjective measurement. Two people can look at the same sequence of numbers and one can say it is very complicated while the other says it is very simple (i.e., not very complicated) and both of them can be right. Something's complicatedness can vary, but changes in its complicatedness do not have any effect on the thing itself. Taking this further, the complicatedness of a thing has no bearing on its complexity.

Returning to the output of Rule 30, Wolfram correctly states that it is complicated. But he also calls it complex, and he implies, at least to the degree that his choice of words appears arbitrary, that these are to some large degree synonymous. Complexity, though, is a constant within a system. Apparent changes in something's complexity are merely apparent; in reality, if something's complexity appears to change, what has actually changed is the thing itself. This distinguishes complexity from complicatedness in that complexity is an objective quality, intrinsic to the thing it is in and independent of any observer's view of it. One finds complexity in a thing's capability and capacity to do something that the parts it is composed of cannot do. This is another way of stating that in a complex system the whole is greater than the sum of its parts. If two parts, neither of which can do x, are connected such that the connected parts can now do x, then there is in this system (this thing with its two connected parts) complexity, and that complexity may be measured by x.
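As a toy illustration of that membership criterion (my own construction, not anything from the thread or from NKS): a single NAND gate cannot compute XOR, but the classic network of four NAND gates can, so the wired-up whole does an x that no part alone does.

```python
def nand(a, b):
    """One part: a single NAND gate."""
    return 1 - (a & b)

def xor_from_nands(a, b):
    """The whole: the standard four-gate NAND network for XOR."""
    t = nand(a, b)
    return nand(nand(a, t), nand(b, t))

pairs = [(0, 0), (0, 1), (1, 0), (1, 1)]
assert all(xor_from_nands(a, b) == (a ^ b) for a, b in pairs)  # the whole does x
assert any(nand(a, b) != (a ^ b) for a, b in pairs)            # a part alone does not
```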

While I am saying that there is a lack of clarity in Wolfram's terminology, I am not saying (or even implying) anything about the actual approach he has taken, the results he's produced or what may result from his approach. I haven't read enough of his book to reach any conclusions along these lines. I will say, though, that the lack of clarity is a bit troubling in that it may indicate there is an underlying problem with the approach.

-Paul


Old Post 08-29-2004 10:12 PM
Jason Cawley
Wolfram Science Group
Phoenix, AZ USA

Registered: Aug 2003
Posts: 712

Complicated and complex are defined in terms of each other in any English dictionary. Erecting grand philosophies on imaginary and private distinctions between them is hair splitting, semantics, and exactly the sort of word game you say you want to avoid. Nor are the imaginary distinctions between what is subjective and what is objective, or between doing something and being something, fruitful in this context. They simply beg lots of entirely disputed philosophic questions, rather than illuminating well understood and uncontroversial computer science ones. Your philosophical hang ups, which the rest of us are under no obligation to share or even care about, are not important to the nature of complexity and what gives rise to it.

As for Mike's question about class 4s, it is a legitimate question whether class 4s in general are universal. But the strong evidence that they are comes from the frontier of universality proofs and the way those operate, as a ratchet. See, universality is generally proved by showing a system can emulate some other system (or system class) already known to be universal. (The origin of the chain back in Turing machines needs other principles, but is already well covered).

The more systems one has shown are universal, the more candidates there are to emulate, any one of which suffices to show one's candidate system is universal. Before NKS, one needed to show a system could emulate an arbitrary Turing machine, or at least a TM with many colors and head states. Now one only has to show a system can emulate rule 110, or a 5 color 2 state TM, etc. And it is just extremely unlikely that scads of class 4s won't be able to emulate an elementary CA with just nearest neighbors and just 2 colors. Almost every system you would care to look at has more internal computational resources, more flexibility, than 110.

Back on the subject of complexity and the rule 30 example, there isn't any way of looking at the center column of rule 30 that makes it simple. I don't care if you record it in a look-up table. You won't get it except from that process, and to get it from that process you will need to calculate a diamond shape's worth of substeps - from an origin, to full width at the halfway mark, and back to narrow again on the next-to-last step - at a minimum. The chance of getting that exact sequence by any other process, for any appreciable length (like your million-step example), is zero. It is irreducibly sequential, computationally irreducible. You can see the minimum boolean expressions involved on page 618. It is by any measure what the dictionaries refer to as "intricate". You can call that "febulated" rather than complicated if you enjoy private word meanings. It won't change the phenomena one iota. Semantics are sterile, and semantic objections to entirely ordinary terms are simply private attempts to regulate other people's language. Wolfram is not confused about the subject and it is silly to pretend he is. He just doesn't need your offered distinction.
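For concreteness (this is the standard one-step form of rule 30, not the minimal expressions for deeper steps that page 618 tabulates): the rule itself is a tiny boolean formula, and it is iterating that formula, not stating it, which is irreducible.

```python
# Rule 30's one-step update in boolean form: new = left XOR (center OR right).
# Checked against the rule number, 30 = 0b00011110, whose bit at position
# 4*l + 2*c + r gives the new cell for neighborhood (l, c, r).
def rule30(l, c, r):
    return l ^ (c | r)

for l in (0, 1):
    for c in (0, 1):
        for r in (0, 1):
            assert rule30(l, c, r) == (30 >> (4 * l + 2 * c + r)) & 1
```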

All that said, I will say a word or two about subjective-objective and being-doing issues. Not that I think they are particularly important in this context, but I do think that systematic fixation on them, and often loose philosophy related to them, gets in the way of understanding some issues in NKS. First of all, properly speaking there aren't "subjective" and "objective" "things". The terms each apply to everything; they are components of analyzed experience in idealist philosophies. Outside that context, their meaning becomes imprecise, sometimes to the point of incoherence.

An experience or an appearance or a phenomenon can be said to have subjective and objective sides or aspects or moments. But a thing can't be said to be either, certainly not to the exclusion of the other. This is true even of entirely mental events, fictions, imaginings, etc. (There is an objective side to the novel "Alice in Wonderland", and a subjective side to a thought about two plus two.) Nor does the distinction entail anything about the supposed truth or lack thereof of either aspect. Objective is not a synonym for true. It just means what stands over against the subject. In typical idealism, it is thought of as a constructed ensemble of properties projected by something that thinks. Subjective does not mean imaginary; it just means what underlies an appearing. In typical idealism, that means some projecting mind. Nobody is required to be an idealist, needless to say. And I for one am not.

As for beings and doings, events are beings. They are extant. Being is itself a doing, not an abstracting of something out of doings. An essence can be abstracted from instances, but being refers to instantiation of an essence, not simply to an essence.

A being can be complex. Nowhere is it written that only other verbs ("doings") have that privilege. A complicated form is often referred to as a structure. Properly speaking this ("structure") refers to an essence (math-like or software-esque), or to a shape, isomorphic to the relations among the elements of some complicated thing. Elements below the level of the structure in question are treated as counters and not analyzed further. A structure is a relational map. As an essence, a structure says nothing about its instantiation. It therefore doesn't have anything to do with "doings", in a sufficiently strict sense.

In the past, epistemology typically made much of the distinction between subjective models and (typically unknowable to some degree) externals. The former were supposed to be reducible to logic in the end, and were therefore intrinsically regarded as fully understandable, or simple. But there was no actual necessity in this. As a matter of fact, reducing something to logic is not equivalent to fully explaining it or to making it simple, because logic need not be simple. Enough of it, intricate enough, is as complex as anything else in existence. This was obscured by implicitly considering only models that were entirely simple, not only in their rules but in their outcomes or behaviors, or more generally by restricting oneself to trivial cases.

Under this impression that model equals reduced-to-logic equals simple equals internal, epistemology mapped any complexity experienced onto externals, or onto inside-outside gaps, or onto physical instantiations rather than logical thoughts - onto the real as opposed to the mathematical. And it therefore treated as the critical problem how we get from a complicated outside thing we are trying to make inferences about, to our simple internal models.

This scheme was simply wrong about where complexity arises. Complexity arises even in the mathematical, the logical, the internal, the essence-ee. The philosophic scheme above implied that logic (or "math") should be simple, but it isn't. It can be as hard to understand the consequences of a purely logical model as it is to understand the behavior of an external physical system. Even with full knowledge of the internal state, definitions, and rules of transformation of that logical model. Something additional remains, to get to a full understanding of what the critter will do. That something is the actual computational work.

Systems that involve an irreducible amount of computational work to understand are difficult to understand, regardless of whether they are external or internal, instantiated or abstract, hardware or software, physical or logical. The actual distinction between "complex" and "simple" does not lie along the grain suggested by past epistemological fashions, but athwart it. There are entirely simple abstract logical systems, and arbitrarily complicated ones (with just as few elements and allowed relations, even). And there are entirely simple and predictable external systems (in Popper's terms, "clocks") and intricate ones (Popper's "clouds").

Complicated systems are difficult to understand for the same reason in both cases; simple ones are simple to understand for the same reason in both cases. The old distinction is a red herring in this context. It does not illuminate the specific nature of complexity. Instead it merely distracts from that specific nature. Which is computational, and already there on the logical-internal-abstract side of the ledger. It does not arise from internal-external gaps. If it did, it wouldn't show up until such gaps had been crossed - but it does show up, before they are crossed, in pure mathematics.

The real "gap" that matters is between the specification of the system, or the statement of its rules, and its actual behavior, the outcome of repeated application of those rules to many elements. The former can be simple without the latter being simple. This does not mean the latter "only appears" to be complex. On the contrary, it means it is complex. But that complexity does not arise only from complex causes; it can also have simple causes. There is no stable map from the simplicity of a rule to the simplicity of its results. The intuition to the contrary is simply wrong (and intuition typically dies hard when it is wrong). Rules of extreme simplicity are as complicated in their results as anything in the universe.

This rules-to-behavior distance - which is spanned by actual computational work - arises on both sides of the ledger: in abstract systems, pure mathematics and logic, and in physical systems that compute the outcome of simple programs in the course of their actual evolution. Not just systems designed to do so, but all sorts of natural systems as well. The rule-to-behavior relation is what complexity is about, as computational irreducibility or necessarily sequential computational work. In dictionary terms, "intricacy". Intricacy is not an artifact of some limited way of looking at a system, it is not an illusion, and it is not a semantic shell game. It is simply a natural fact, a recurrent real phenomenon, in systems - artificial or natural, abstract or physical, makes no difference.



Old Post 08-30-2004 06:01 PM

Forum Sponsored by Wolfram Research

© 2004-14 Wolfram Research, Inc. | Powered by vBulletin 2.3.0 © 2000-2002 Jelsoft Enterprises, Ltd. | Disclaimer | Archives