A New Kind of Science: The NKS Forum > NKS Way of Thinking > Complexity definitions and measures
Author
Jason Cawley
Wolfram Science Group
Phoenix, AZ USA

Registered: Aug 2003
Posts: 712

Complexity definitions and measures

In another thread, a forum member asked about definitions and measures of complexity. It is a fine question. The variety of reasonable answers - see 557, 1068 - reflects the fact that complexity is a subject we are still learning things about, so our characterisations of what is essential about it still have "play".

Complexity is an attribute of a formal structure, pattern, or set of data, which marks the failure of our usual, general methods of perception and analysis to extract a short, comprehensive description. Its opposite is simplicity - the ability of some general method of analysis to express the content of the structure or pattern in a short form, involving far fewer terms.

A system may appear complex and still be reducible - for example, when the reduction is fairly involved, not obvious, and highly specific to that system (a "crack" or decoding of that system only). This is typically considered an intermediate case, having some of the characteristics of simplicity and some of the characteristics of complexity.

As with most questions of system analysis, the level of description and the characteristics chosen are a prior, independent given. Thus the coarse-grained behavior of a system may be simple while its details are complex; its phase might be simple while the history of its temperature variations throughout space is complex; its components might be simple but its overall structure complex; or its end state simple but its trajectory to that end state complex; and so on.

How is it measured? First by a coarse-grained classification: readily explained by a simple formula, for example, scored "0", or resisting even rough statistical characterization, scored "1". One can examine entropy measures, without trusting them too much. One can subject a set of data to various methods of compression and see whether they shorten it significantly. Logical depth - the number of logical operations needed to construct a structure from its description - is a decent first-order approximation to what we mean by complexity.
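The compression probe above can be sketched in a few lines. This is only an illustration, not a definitive complexity measure: it feeds a periodic sequence and a pseudorandom one (a seeded stand-in for "complex" data) to a general-purpose compressor and compares how much each shrinks.

```python
import random
import zlib

def ratio(data: bytes) -> float:
    """Length of zlib-compressed data relative to the original."""
    return len(zlib.compress(data, 9)) / len(data)

simple = bytes(t % 2 for t in range(4096))            # period-2 pattern
random.seed(0)                                        # fixed seed, for reproducibility
complex_ = bytes(random.randrange(256) for _ in range(4096))

print(f"periodic: {ratio(simple):.3f}")   # tiny: compresses very well
print(f"random:   {ratio(complex_):.3f}") # near 1: essentially incompressible
```

A large gap between the two ratios is the rough "0" vs "1" classification mentioned above; real data usually falls somewhere in between.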

Anything produced by a simple rule of any kind has some short description. But if that description is entirely peculiar to that system, we would not call the system simple on that basis. Simple forms recur repeatedly, in widely different contexts.

Thus the center column of CA rule 250 from a single black cell has a much shorter description than the whole evolution of that pattern - step number mod 2. There is a short statistical model of the center column of rule 30 - Random[Integer]. But that description gets the actual sequence completely wrong, faithfully capturing only its mean density and the size distribution of runs. We say the average behavior of rule 30's center column is simple, but its actual explicit behavior, in detail, is complex.
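The contrast between the two rules is easy to check directly. This is a minimal sketch, assuming a finite row wide enough that boundary effects never reach the center within n steps: rule 250's center column is just the step number's parity, while rule 30's center column only statistically resembles coin flips.

```python
def center_column(rule, n):
    """Center-cell values over n steps of an elementary CA, from a single black cell."""
    width = 2 * n + 3               # wide enough that the edges never influence the center
    cells = [0] * width
    cells[width // 2] = 1
    out = []
    for _ in range(n):
        out.append(cells[width // 2])
        # standard elementary-CA update: neighborhood (l, c, r) indexes a bit of the rule number
        cells = [(rule >> (cells[i - 1] * 4 + cells[i] * 2 + cells[(i + 1) % width])) & 1
                 for i in range(width)]
    return out

n = 1000
col250 = center_column(250, n)
col30 = center_column(30, n)

# Rule 250's center column is fully described by the step number's parity...
assert all(c == (t + 1) % 2 for t, c in enumerate(col250))

# ...while rule 30's center column merely has simple *average* behavior:
print(sum(col30) / n)   # mean density near 1/2
```

The assertion is the short description in action; no comparably short formula is known that reproduces rule 30's center column exactly.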

How complex? Well, there is an apparently irreducible amount of computational work that must be performed to answer a question like "give me the center cell sequence from t to t+n" - assuming no pre-built lookup tables created by already having performed the same amount of computational work. Modest constant-factor speed-ups might be possible, e.g. by composing two steps of rule 30 into one step of a more complicated rule, or similar schemes. But no log-scale speed-up appears to exist. We say the evolution is "incompressible", or the computational work involved is "irreducible".
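The two-steps-in-one composition mentioned above can be built explicitly. A sketch, on a cyclic row: two steps of rule 30 are composed into a single rule over five neighbors (a 32-entry table). This halves the number of steps but enlarges the work per step, illustrating why such schemes yield only constant-factor gains.

```python
def rule30(l, c, r):
    """One cell update of elementary rule 30: left XOR (center OR right)."""
    return l ^ (c | r)

# Build the 5-neighbor table for two composed steps of rule 30.
composed = {}
for bits in range(32):
    n = [(bits >> (4 - i)) & 1 for i in range(5)]   # offsets -2..+2 from the cell
    a = rule30(n[0], n[1], n[2])                    # intermediate value at offset -1
    b = rule30(n[1], n[2], n[3])                    # intermediate value at offset 0
    c = rule30(n[2], n[3], n[4])                    # intermediate value at offset +1
    composed[tuple(n)] = rule30(a, b, c)

def step(cells):
    """One rule 30 step on a cyclic row."""
    w = len(cells)
    return [rule30(cells[i - 1], cells[i], cells[(i + 1) % w]) for i in range(w)]

def step2(cells):
    """One composed step, equal to two rule 30 steps."""
    w = len(cells)
    return [composed[tuple(cells[(i + d) % w] for d in (-2, -1, 0, 1, 2))]
            for i in range(w)]

row = [0] * 20 + [1] + [0] * 20
assert step2(row) == step(step(row))
```

The composed table has 2^5 entries where rule 30 has 2^3; composing k steps needs 2^(2k+1) entries, so the table grows exponentially while the step count only shrinks linearly.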

Notice that the definition of complexity and a diagnosis of its cause are distinct questions, though obviously they are related. The NKS book proposes computational sophistication as an underlying cause of complexity. This is an attempt to explain an independently perceived phenomenon, by tracing it to another one that we know is sufficient to cause it.

In other words, Wolfram suggests that the reason we see complex things as different from simple ones is that they actually do contain sufficient internal variety that they could, without altering their underlying rules, be programmed to implement any finite algorithm. See 735 and following.

This is not a definition, it is a conjecture - one version of the principle of computational equivalence. There are various ways in which it could fail to hold. E.g.

(1) Complex looking systems might not generally be irreducible, instead having more or less difficult "cracks" that we simply haven't found yet. (I call that the "crackables" question - how far do reducible systems extend into the apparent class 3s?)

(2) Or most complex looking systems might be irreducible, but not in fact capable of universal computation. In particular, class 3 systems might not be universal. To prove that any example of a class 3 system is universal is one of the main open problems in NKS at this stage. (I call this the "class 3" question).

In that case (2), complexity and irreducibility might track each other, but fail to coincide with universality. Universality is sufficient for irreducibility, that much we know. But it may not be necessary, or typical. PCE conjectures that it is typical, but that has not yet been proved.

I hope this helps.


11-23-2004 03:54 PM
