A New Kind of Science: The NKS Forum > NKS Way of Thinking > Explain NKS in 10 minutes or less
Author
Todd Rowland
Wolfram Research
Maryland

Registered: Oct 2003
Posts: 103

Explain NKS in 10 minutes or less

At the conference NKS 2004, I was asked if I could explain NKS to someone who hasn't read the book in ten minutes or less. We had a short discussion in the computer room, and I later wrote up my explanation in notebook form, which is the attachment to this post.

I'd be interested in seeing other people's short summaries, possibly covering other aspects of NKS than what I covered here.

Attachment: nks-intro.zip

Report this post to a moderator | IP: Logged

06-04-2004 12:28 AM
MikeHelland

Registered: Dec 2003
Posts: 179

What I told a friend with no scientific or mathematical background or interest:

See this row of boxes? Only one is colored in, right?

What's going to happen is I'll change the row so that the only boxes that are colored in are the ones that were previously to the left of a colored-in box.

So how will the row look when I'm done? Right, the colored box is shifted over one.

And if I do it again? Right. If I stack all the new rows underneath the previous one, I get this line looking thing that goes off to the right.

Pretty simple, eh? Well, using exactly the same technique but with slightly different criteria for which boxes are colored in, I get different results.

Look at how random this one is! (Rule 30)

Using these same ideas we can actually create leaves and shells and those sorts of things we find in nature, so we think the universe really might work like this at some level.
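The two rules described above can be run in a few lines of code. This is a minimal illustrative sketch (in Python, which the post itself doesn't use) of an elementary cellular automaton: in Wolfram's standard rule numbering, the shift rule described above corresponds (up to direction) to rule 240, and the random-looking one is rule 30.

```python
def step(cells, rule):
    """One step of an elementary cellular automaton on a circular row of 0/1 cells."""
    n = len(cells)
    new = []
    for i in range(n):
        # Each cell's new color depends on its left neighbor, itself, and its right neighbor.
        idx = (cells[(i - 1) % n] << 2) | (cells[i] << 1) | cells[(i + 1) % n]
        new.append((rule >> idx) & 1)  # the rule number's bits are the lookup table
    return new

def show(rule, width=31, steps=12):
    """Print successive rows, starting from a single colored box."""
    row = [0] * width
    row[width // 2] = 1
    for _ in range(steps):
        print("".join("#" if c else "." for c in row))
        row = step(row, rule)

show(240, steps=5)   # shift rule: the colored box moves over one each step
show(30, steps=12)   # rule 30: a seemingly random triangle of cells
```

Stacking the printed rows reproduces the pictures the post describes: a diagonal line for the shift rule, and a complex triangle for rule 30.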

06-04-2004 07:07 AM
Jason Cawley
Wolfram Science Group
Phoenix, AZ USA

Registered: Aug 2003
Posts: 712

If I only had ten minutes with someone unacquainted with the subject, I don't think I'd try to explain what a CA is or how it works. I'd stay more general than that.

The successes of modern science have largely been built on mathematics - finding relationships in nature that follow the various forms described by mathematical functions and equations. Many problems can be addressed directly this way. With some problems, involving large numbers of interacting elements, we can extend the same methods by focusing on only some overall properties we are interested in - averages and other statistical measures - provided the relationships among the elements of the system are simple enough. We leave out a lot of other details this way, but get to keep using mathematical techniques. But in some systems, these details matter even for the eventual overall behavior, and traditional mathematical methods therefore don't really manage to crack them.

Wolfram looked for generalizations of mathematics that might be able to get further, with such cases in mind. The idea is to use computer programs rather than mathematical equations to describe the formal relations within the system. In an overall way, this is a familiar idea from computer modeling. Wolfram then asked: how simple can these programs be and still generate the various sorts of behavior we see in natural systems? Do they need to be highly involved and specific? Math was useful because the same simple form - a line or a repeating wave, for example - was seen over and over in system after system. Simple things tend to be general things, and what we know about them from other cases and formally can thus be applied in other places too. Can programs simple enough for the same to be true of them generate complex behavior like we see in nature, especially the sort traditional math wasn't letting us crack?

Rather than starting with very complicated computer models and trying to make them marginally simpler, Wolfram started at the other end. He took some of the simplest possible computer programs. And he enumerated whole types or classes of them. And then looked through those types to see what sort of behavior they could generate - much as one would look through conic sections or trigonometric functions or the simplest differential equations, expecting to find patterns general enough they could occur in a wide range of real systems.

He thought he'd have to dial up the allowed complexity of these classes of programs to get any interesting new behavior. He found, instead, that even the simplest full class of cases he looked at - a class of rules called cellular automata - already included resulting behaviors as complicated as anything he'd seen. This might have been something special about the first rules he looked at. So he examined a large range of other rules, each different in set up and details, dropping feature after feature of the original class. Sometimes he had to allow a few additional relations among elements, but he found the same variety of behaviors again.

The range of behaviors reached by simple programs thus has two surprising characteristics, found empirically in computer experiments. One, they can behave in surprisingly complicated ways. And two, the ways they can behave are remarkably stable across changes in the underlying rules, and don't seem to depend on those details in any fundamental way. Any sufficiently involved set of rules for simple programs can generate the same large classes of behavior - evolution to a fixed point, stable cycles, nesting, seemingly random complexity, and complicated interaction of localised structures - for some choice of the particular rule. The details of the patterns seen are sometimes also dependent on the initial conditions, and sometimes largely insensitive to them.
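The behavior classes listed here can all be found among the 256 elementary cellular automaton rules. The sketch below (Python; the particular rule choices are standard illustrative picks, not taken from the post) prints one representative of each class.

```python
def evolve(rule, width=63, steps=20):
    """Run an elementary CA from a single black cell on a circular row; return all rows."""
    row = [0] * width
    row[width // 2] = 1
    rows = [row]
    for _ in range(steps):
        # Look up each new cell in the rule number's bits, indexed by the 3-cell neighborhood.
        row = [(rule >> ((row[i - 1] << 2) | (row[i] << 1) | row[(i + 1) % width])) & 1
               for i in range(width)]
        rows.append(row)
    return rows

# One representative per class of behavior:
#   rule 254: evolution to a fixed (uniform) state
#   rule 90:  nesting (a Sierpinski-like pattern)
#   rule 30:  seemingly random complexity
#   rule 110: complicated interaction of localized structures
for rule in (254, 90, 30, 110):
    print(f"--- rule {rule} ---")
    for r in evolve(rule, steps=15):
        print("".join(".#"[c] for c in r))
```

Running this shows rule 254 filling in to a solid block, rule 90 producing a nested triangle, and rules 30 and 110 producing the kinds of complexity discussed above.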

The idea is then to use simple programs to model behaviors in nature that exhibit similar patterns of overall behavior. Including some that have proved hard to address with standard mathematics. Wolfram showed a number of simple examples of this - in crystals, in fluid flows, biological forms and pigmentation patterns. And he is convinced it is much more general than those, a basic way systems natural or formal can and do behave. Perhaps including fundamental physics.

Wolfram then asked, in effect, why this should be so - what underlying formal facts might account for it? What is it that even simple programs are already able to do, that allows them to generate such complicated behaviors, and that makes the same sorts of behavior crop up again and again in system after system? What do they all have in common, despite their detailed differences? Wolfram ties this back to significant formal results of the 20th century about the idea of universality in a formal system - an idea that came out of investigations in the foundations of math, and in turn helped make the computer revolution possible.

Universality is what lets a practical computer with a single hardware set-up emulate all kinds of behavior by varying its program or software. The program corresponds to the initial conditions, and the hardware corresponds to a particular rule. A rule qualifies as universal if there is some way of programming it - that is, some set of initial conditions - that can get it to produce the output of any other computable system. And what Wolfram found is that even programs not explicitly set up to have the property of universality do in fact have it. And not just highly complicated ones unlikely to occur anywhere but in systems we engineer - as was already known - but even the simplest classes of rules. So simple, we can readily imagine natural systems behaving according to them.

So Wolfram speculates that the reason we see the same complex behavior in so many classes of system, is because they are all across the threshold of universality. They are systems of equivalent internal sophistication, as it were. You can't limit what they are going to do beforehand without putting constraints on their initial conditions, because with full choice of those you could program them to behave in now one way, now another. He thinks the variety and complexity we see in natural systems reflects the same formal variety he saw among simple programs. And that even some of the simplest of these already have enough going on, that they could account for the most sophisticated things we see.

If this is right, it would provide a basis for a significant extension of science, from systems behaving according to simple mathematical relations as we've used in the past, to systems behaving according to simple programs. And this extension would take us all the way to the most involved behaviors we see, without needing programs with enormously complicated, special set-ups or thousands to millions of lines of special code. To make this happen requires a pure study of computer programs parallel to the study of pure mathematics in the past - and an experimental attitude toward what simple programs can do, since it can be arbitrarily hard to see what even a simple program might do just by looking at its rules.

The gist of it, then, is that it turns out computer programs can go farther than traditional math as a formal basis for science, and one can get a lot just from the simple ones. The new "pure mathematics" of future science based on this idea will be the formal study of the behavior of simple programs. The new "applied science" of this idea will be using simple programs to model all sorts of natural systems - and potentially also to engineer all sorts of technological systems.

We didn't invent computing. For a few decades we thought we had, because the only systems we knew about that computed were systems we specifically engineered to do it reliably, on set tasks, and in ways predictable for us. But then we learned a bit more about computing, including how simple it can get and still meaningfully be called that. And we noticed that nature has been computing - if seemingly arbitrary things - all along (just as planets had been moving in ellipses long before Kepler noticed). So in a sense, we find retroactively that we discovered rather than invented computing, or that we have just adapted a natural phenomenon to our purposes. With NKS, we have discovered the correspondence between what we thought we had invented not long ago and what nature has been doing all along.

06-04-2004 03:27 PM
MikeHelland

Registered: Dec 2003
Posts: 179

Jason, while your explanation might be good for a mathematician it would have been completely lost on my pot-smoking buddy.

Doesn't the program just take the place of a mathematician? I mean, whether your model is written as equations which a human computes, or as a program that computes itself, you're doing the same thing. Computation existed as a human mathematical activity before the discoveries you speak of.

For what it's worth, I do agree that programs are more elegant formalizations for many models, which in turn improves the intuition we have towards creating models. But that the programs allow us to do something entirely new? Not sure. Look at what Gödel was able to say using logical systems. Isn't that really just NKS?

06-04-2004 05:35 PM
Tony Smith
Meme Media
Melbourne, Australia

Registered: Oct 2003
Posts: 167

NKS in the History of Ideas

The pervasive climate of naive negativity towards NKS is largely attributable to issues of style, not substance. Many who have invested careers in niceties of academic discourse are reflexively antagonistic to Wolfram's choice of methods for getting his ideas out there, despite/because he speaks their language well.

It may turn out in the fullness of time that NKS will be seen as symptomatic of millennial madness. It is grounded in our generational preoccupation with digital computing and digital communications and with proprietorial approaches to products of the intellect, all of which are prime candidates for black boxing as utility services sooner rather than later.

However, NKS does try to land a telling punch for atomism against the continuum, and that battle is unlikely to be concluded any time soon, especially given the magnitude of investment in mathematics of the presumed continuum. It is an Olympian battle going back at least to ancient Greece, in which even the momentous discoveries of such discrete elements as chemical atoms and DNA base sequences have failed to stem the continuum tide in mathematical analysis.

Language is by nature discrete, including the language of formalism. Even numerical methods are discrete, though they are mostly presumed to approximate the continuum. Gödel showed that any practically useful formal system cannot tell us all that is true even within its own scope. Wolfram shows us that "simple programs" acting together can produce a class of results that are inaccessible to the mathematics of the continuum.

NKS goes to great lengths to demonstrate that simple programs expressed in a diversity of formal systems can produce "equivalent" results, in particular Turing's idealised computers for which important mathematical results have been previously determined. Wolfram also suggests that evolving networks are the best candidate model for a fundamental discrete model of spacetime, as increasingly do others coming from very different directions. Yet the clearest and most common demonstrations of NKS are provided by cellular automata, not because there are any credible claims that CAs are a candidate model for anything very general but rather because CAs are by far the easiest to handle for both human perception and current computers.

Wolfram is unashamedly materialistic but escapes the trap of confusing determinism with predeterminism by powerfully invoking the much studied notion of computational irreducibility, showing that a large fraction of his simple programs are computationally irreducible, so that the only way to see how things turn out is to let them, and the universe, run their course. This steps around the whole question of free will, but in a way that can be easily portrayed as amoral fatalism. However his analysis of computational irreducibility provides a strong grounding for that other great practical success of mathematics, statistical analysis, as well as at least a candidate approach to getting the observed resilience in the world to emerge from previously fragile computer programs.

At one level it may be easiest to see NKS as the kind of conference paper where an idea is floated - but in this case an idea big enough to have consumed more than a decade of the life of a modern genius, and which thus demanded 1200 pages to even start to lay out. It is a starting point for a project which, if it gains critical mass, is likely to continue across decades or even generations.

While NKS is grounded in Wolfram's leadership of the Complex Systems journal and the development of Mathematica software, it does not subsume either. NKS seems particularly unconcerned with some of the general principles of complex systems across disciplines which emerged during a brief interval of fashionable attention around the late 1980s. Rather than destabilise the accepted hierarchy of related intellectual activity, Wolfram seems to very deliberately position NKS as yet another specialisation in which simple programs become the object of study rather than their traditional role as source of models to be cherry picked by established disciplines. Of course this is not an either-or position.

Beyond promoting A New Kind of Science, Wolfram focuses the NKS book on his arguably premature claims for a Principle of Computational Equivalence based on his well informed anticipation that a broad class of simple programs will be demonstrated to exhibit computational universality, the capacity to emulate each other originally identified by Turing. There are a range of views as to how important PCE is to the rest of the NKS enterprise, based on different readings of the book.

NKS should not be seen as a grandiose argument that the universe is a computer, at least not in the sense defined by the movie The Matrix and a few maverick theoreticians, though PCE might still allow that as a possibility. There is an important distinction to be drawn between atoms that compute and computers that emulate atoms, a distinction which is not often drawn clearly. Wolfram's "simple programs" do not require a computer running them in parallel at each element, nor even an all powerful computer running them serially so as to create the illusion of time. They are just what stuff does, and a way for us to model it on our general purpose computers.

__________________
Tony Smith
Complex Systems Analyst
TransForum developer
Local organiser


06-05-2004 04:35 AM
Brian Silverman
MIT, Playful Invention Company

Registered: Nov 2003
Posts: 5

I agree with Jason that it's better to be general in a quick description of NKS. I find that there's a popular misconception that the main point of NKS is that the universe is a cellular automaton. Talking about cellular automata won't help dispel this misconception.

I think the following Wolfram quote, though not from NKS itself, provides a great starting point for a quick description.

"Four centuries ago, telescopes were turned to the sky for the first time -- and what they saw ultimately launched much of modern science. Over the past twenty years I have begun to explore a new universe -- the computational universe -- made visible not by telescopes but by computers"

NKS is about exploring the computational universe. In particular it's the beginning of a systematic study of simple programs. This is really a new kind of science. Sure, programs have been studied in detail in many different ways. However, we've only studied programs that have been carefully constructed to produce particular kinds of results. Exploring computation "in the wild" is something quite new.

Wolfram found that the kinds of behaviours exhibited by simple programs seem remarkably invariant of the nature of the underlying rules. Even the simplest rules can produce the most complex kinds of results. This was surprising, but ultimately not too surprising because it echoes and builds on Turing's work on computational universality.

Another surprise is that almost no matter what the rules are the ultimate behaviour falls into one of a handful of equivalence classes: the result is either stable, random, or complex. There are only a small number of classes of behaviour resulting from almost any simple program. This leads to the conclusion that simple programs could very well be a good way of modelling nature. Wolfram pursues this and produces dramatic results in biology, chemistry, hydrodynamics, and elsewhere.

So why cellular automata? They are a kind of simple program that are extremely simple to understand and at the same time exhibit behaviours no less complex than systems with more sophisticated rules. But that's not the point. CAs are just a simple way in to the broader exploration of the computational universe.

06-05-2004 03:50 PM
Fiona Maclachlan
Manhattan College
Riverdale, NY

Registered: Oct 2003
Posts: 11

The choice of topics for a ten minute presentation would have to depend on what the audience already knows. If they are unfamiliar with cellular automata then I think you would want to discuss how they work and show, with pictures, how they can give rise to the four classes of behavior. It's important to understand what's involved in a simple program and the quickest way to learn would be with examples.

On the question of what general ideas to present, I would put computational irreducibility ahead of universality, especially if the audience didn't have much of a background in computer science. The idea of CI is accessible to anyone with a general science background and is crucial to understanding why the CA findings point in the direction of a new kind of science.

Another reason to cover computational irreducibility is that it appears to have escaped the notice of seemingly serious reviewers of the NKS book. Unfortunately, people without much time are likely to rely on these public intellectuals for their opinions.

Steven Weinberg in The New York Review of Books stresses the importance of discovering fundamental laws of nature whose predictions can be tested against experimental data but nowhere addresses Wolfram's argument that these predictive laws are possible only if the system one is studying is computationally reducible. Weinberg classifies NKS as a "free floating theory" and says that "to justify applying one of these theories in a given context you have to be able to deduce the axioms of the theory in that context from the really fundamental laws of nature." Melanie Mitchell in Science similarly misses the idea of computational irreducibility when she suggests that Wolfram's claims are subsumed under dynamical systems theory and "particularly the subset often known as 'chaos theory'." (It's as if the reviewers' copies were missing pages 737-750 ... )

06-16-2004 05:52 AM
Mike Lin
MIT
Cambridge, MA

Registered: Nov 2003
Posts: 14

I've given a few brief presentations to people around MIT. I understand this is not an average audience, but here is how it usually goes:

1. Assume: the Church-Turing thesis is true. A universal machine is at least as "sophisticated" as any other. (Note that there are machines that are not universal.)
2. CLAIM: there exists a hierarchy of the sophistication of computing machines, with a trivial function F(x)=0 at the bottom and universal machines at the top. There are perhaps many levels in this hierarchy.
3. Somewhere along this hierarchy lie two thresholds. The first is a "threshold of complexity", where the systems have reached sufficient sophistication to produce computations that appear complex. The second is a "threshold of universality", where the systems have reached sufficient sophistication to produce universal computation. The threshold of universality is somewhere above the threshold of complexity.
4. The complexity of extremely simple systems (like Rule 30) SUGGESTS that the threshold of complexity is very low, a nontrivial discovery.
5. Furthermore, the universality of extremely simple systems (like Rule 110) SUGGESTS that the threshold of universality is also very low, a surprising discovery.
6. CLAIM: The threshold of complexity and the threshold of universality are in fact so close together that a complex system is "almost" surely capable of universal computation. That is, there are "practically" no systems that lie exactly between the two thresholds.
7. It then follows that practically all complex computations are equivalent in sophistication; that is, "internally" capable of universal computation.

I think that captures my understanding of the core theoretical principles of NKS. Now we move on:

8. CLAIM: Processes in nature can be viewed as computations. In particular, complex natural phenomena can be viewed as complex computations.

The Principle of Computational Equivalence then follows.

My experience is usually that everyone immediately rejects all three central claims. I am not really inclined to disagree given the evidence available; Wolfram frequently relies on his decades of accumulated intuition to support them. But I hope this frames the debate in a unique way.

Last edited by Mike Lin on 06-18-2004 at 09:03 PM

06-18-2004 01:27 PM
Mike Lin
MIT
Cambridge, MA

Registered: Nov 2003
Posts: 14

I dug up a pretty stupid figure I made a while back to go along with that. The caption is:

"As computing machines get more complicated and powerful, there are certain thresholds past which complex behavior and universal computation become possible. Cellular automata suggest that both thresholds are quite low, so that complexity and universality arise commonly. The two thresholds are furthermore conjectured to be very close together, so that any systems capable of complex behavior are also very likely to be capable of universal computation."

Mike Lin has attached this image:

06-18-2004 02:09 PM
Jesse Nochella
WRI

Registered: Mar 2004
Posts: 132

A Stern Outlook on Teaching People New to NKS

If I were given 10 minutes to explain NKS to someone for their first time, I would take a few things into consideration:

First: The initial formal descriptions of NKS are best forgotten. All most people will recall from them are profusely warped interpretations of what NKS really is.

Second: Once they are satisfied with their description of what NKS is, they can relax and assume that they are not interested in it.

Third: If they are really, really curious, what they are asking for is to be lost, and maybe even upset. If they are really curious as to what NKS is all about (I feel I'm talking more about the general public here than the scientific community, who are expecting not a translation of present ideas but a set of new ones), what they are really looking for is the feeling of burning curiosity ignited by things beyond what they presently understand.

Taking these things into consideration, my idea of a presentation made to impress (mentally) would be as quick and precise as possible. Every concept would be presented as something that would be very difficult to even just deny (which is easy with NKS, because most of the ideas can be backed up with gigabytes upon gigabytes of rock-solid experiments). More specifically, I would jump over all primers and introductory information and tell them, from a philosophical view, exactly what NKS has to do with life, philosophy, the universe, mathematics, anything and everything.

At times I've been angry with what I feel are limits placed on different kinds of mathematical principles. Later I feel enlightened when I understand what those limits are about, and the reasons they have to be there. I remember once believing that undecidability and irreducibility were choppy limits, and that they simply could not be true! Well, well, well. Now the very same principles are tools!

I'd want to jump directly to these kinds of topics in a presentation because of their apparent controversy. People don't know what to make of it, disagree with it, but always remember it, because they can never prove otherwise. It's worth all the mental discomfort in the world to finally understand something. I may be wrong, but I would bet that anyone who finally comes through with understanding after a long period of mental discord would say that the time they spent rejecting it was a waste, and then leave it at that.

The entire ten-minute explanation would be all about claims set up for controversy, challenge, and denial. Claim that there can be no way of telling what the past was, any more than the future; that three planets in space can emulate a computer, a flower, a human mind, and even another universe. Claim that the configuration of those planets is no more complex than a single atom, and whether they believe you or not, they will be impressed and remember what you said.

I think that the real skill in talking to people new to NKS comes not from transforming the information you already know into a different form that someone else can understand, but from knowing that your ideas are good enough that, in any form, they can deeply impress someone, challenge their beliefs, and possibly drive them in a new direction of pursuit. I'm not saying it will, but it's really not a bad thing if it does. And if what they're curious about is really whether NKS can do that to them, why not give them the seed crystal they're looking for to expand on?

Last edited by Jesse Nochella on 06-30-2004 at 03:16 PM

06-29-2004 07:17 PM
Todd Rowland
Wolfram Research
Maryland

Registered: Oct 2003
Posts: 103

These are all pretty good. It is interesting that they differ so much.

I was thinking about the need for a new intuition to understand simple programs and NKS, and how that compares to what people say about quantum mechanics. Since that is something people are already familiar with, it might be a good analogy for explaining NKS.

About quantum mechanics, one often hears that it is important to think differently - to ignore ordinary physical intuition - when considering quantum reality at the smallest scales.

We could explain the parallel with NKS: one must think differently about the simplest rules, ignoring ordinary intuitions about size and complexity, and ignoring ideas about how we make programs work.

Their potential should not be measured by their ability to perform traditional tasks.

Other parallels:
QM began as a purely intellectual enterprise, with a few interesting physical examples, just like NKS. Perhaps NKS has more physical examples, though early NKS is arguably just as commercially uninteresting as early QM was. [until the semiconductor? nuclear power?]

They were both turn-of-the-century sorts of things, which went on to heavily influence the rest of the century. QM began in the early 1900s (Planck, 1900) and NKS came out in 2002.

Attached is my original post converted into pdf.

Attachment: nks-intro.zip