A New Kind of Science: The NKS Forum > Applied NKS > Most successful application thus far? (page 2 of 2)
Jason Cawley
Wolfram Science Group
Phoenix, AZ USA

Registered: Aug 2003
Posts: 712

It is fair to note the historical tendency to think of nature in terms of man's latest tech, and specifically to compare present focus on the computational aspects of nature to prior focus on the mechanical aspects. But of course, mechanics are much more than a metaphor - they remain the core of physics. It is not a mere analogy that physical systems are mechanical systems - whatever else they are.

Similarly with computation, it is not a matter of choosing an analogy. There is a specific formal fact that makes general computation possible, and it is what I am referring to when I say it goes on in natural systems. That fact is the independence of specific computations from the detailed underlying operations used to carry them out; otherwise put, the instruction set does not have to change for the computation to change.

The reason we can make general purpose computers is not simply that nature is orderly and follows rules. It is, instead, that there are small sets of underlying operations available, which can perform any feasible computation by just being interleaved in different ways. You don't need to add a new instruction to the set to reach a new space of desired behaviors. You can instead just interleave the elements of the instruction set differently.
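(To make that concrete, here is a minimal toy sketch in Mathematica, invented just for illustration: a fixed "instruction set" of three primitive operations. The set never changes; only the interleaving does, and different interleavings compute different functions. The names inc, dbl, sqr and the run helper are of course made up for this example, and this tiny set is nowhere near universal.)

  inc[x_] := x + 1;    (* add one *)
  dbl[x_] := 2 x;      (* double *)
  sqr[x_] := x^2;      (* square *)
  run[program_List, input_] := Fold[#2[#1] &, input, program];

  run[{inc, dbl, sqr}, 3]        (* ((3 + 1)*2)^2 = 64 *)
  run[{sqr, dbl, inc}, 3]        (* 3^2*2 + 1 = 19 *)
  run[{dbl, inc, inc, dbl}, 3]   (* ((3*2) + 1 + 1)*2 = 16 *)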

Not every set of possible instructions or underlying primitive operations has this characteristic. Early on in the history of computing, it was thought almost obvious that only sets of instructions specifically designed to have this property would have it. But that intuition was incorrect. Instead, once we fully understood the phenomenon (call it "programmable flexibility"), we found more and more formal systems that exhibit it, including much simpler ones than expected. And we started seeing instances of systems in nature that easily get to the threshold of internal complexity required, and show a similar range of resulting possible behaviors.

Looking at complex behavior in nature, one might think in each specific case that there has to be something special going on in the particular components that make up that instance of complexity - say in fracture, or dendritic patterns, or turbulence - but this is almost certainly incorrect. Instead the same formal fact probably underlies them all: these are systems internally complex enough to support programmable flexibility, and they therefore naturally show the same range of behaviors (or behavioral classes, if one prefers) as general computation.

Just as systems with widely different underlying components can all behave as fluids because they follow the same conservation laws in their mechanical motions, or be described thermodynamically because the same statistical laws apply regardless of components, systems over the threshold of computational sophistication will show the characteristic range of behaviors seen from computational sophistication (aka "universality", up to some limit idealizations).

This tends not to be appreciated by researchers with little internal knowledge of how practical computers actually work, or of the principles that allow general purpose computers with unchanged and simple underlying instruction sets. If one thinks of artificial computation as a "black box", with human ingenuity supplying the inputs and the outputs only made significant to human thought by "post processing", one can miss the underlying formal point.

Find the underlying instruction set used in the natural instance under examination, and then formally test the limits of what different interleavings of those instructions can do. If you find they can do anything - any feasible computation - then you know that the possible complexity of that natural system is "unbounded above". You might still constrain the input space, or find all sorts of regularities in the outputs, at a suitably coarse grained or ensemble level. But the detailed output behavior may depend on the detailed initial conditions, according to an involved ongoing computation as complicated as anything possible in our universe. When that is the case, it is not a metaphor to say that the natural system in question is computing. It is just a more exact understanding of what it means to say of anything that it "computes".
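(Continuing the toy sketch above, purely to show the shape of "testing the limits of interleavings": enumerate every program up to length 4 over the same three primitives, using the definitions from the earlier sketch, and count how many distinct input-output behaviors show up on small inputs.)

  ops = {inc, dbl, sqr};
  programs = Flatten[Table[Tuples[ops, n], {n, 1, 4}], 1];
  behaviors = Union[Table[run[p, #] & /@ Range[0, 5], {p, programs}]];

  Length[programs]    (* 120 programs in all *)
  Length[behaviors]   (* number of distinct behaviors reached on inputs 0..5 *)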

Posted 12-08-2007 05:58 PM
tomjones


Registered: Not Yet
Posts: N/A

You are correct up to a point, but it is also the case that the more one abstracts a system, the more complex the model becomes, since the distance between the way the model works and the way the actual system works grows so large.

When it comes to current computers - or, as you put it, artificial computers - there is nothing simple about the instruction sets. I am not sure whether you have ever written an instruction set, or read the one for the Intel processor your computer probably uses, but they are extremely complex. The instruction sets are constantly changing, with more instructions being added all the time; the current Intel instruction set runs to something like 2000 pages when written out. In fact there is nothing simple in current computer design, and that is part of the problem - it shows just how inefficient current computer design is becoming.

As to the question of whether computing gets us closer: that is, I would say, up for debate. However general your notion of computation, if it is the formal one that has developed, I would argue the abstraction is still too far away from the way actual systems in nature work. I am quite aware that this is always the case - one is always mapping one system onto another - but it would seem one could get closer to the way things actually work. Ultimately, as soon as some new technology replaces computers, NKS will be re-devised with that new technology at its base, if history is any guide.

"It is a formal phenomenon, which occurs in nature as well as in engineered systems. It doesn't need any intention to arise as a phenomenon - it is simply a formal property of all sorts of systems, including natural ones."

This statement is partially false, of course: computation needs intention or purpose to arise. If you don't believe that, try taking apart a computer, putting the pieces in a bag, shaking them up, and seeing how long it takes for a computer to pop out. This is of course ridiculous, and it is ridiculous even in simple cases. I find it amusing, since none of what NKS has "discovered" could have been done without computers, which are carefully engineered, or without carefully organized searches; there is nothing unintentional about NKS methods. Whether computation is at the core of things or not, it will never be a purposeless phenomenon, or one that just happened to come about.

"But even without any such arranging, nature has been computing all along - we just didn't know enough about what computation was, to notice it, or to think of it in those terms. "

False, this is not the case: nature was arranged to compute, if you want to use that term - computation is an arranged phenomenon. Noise times noise always equals noise; if you start with nature as random or unintentional, you will never get computation out of it. This is a formal fact about computation and about the properties that must be present for computation to take place. Even simple systems like CAs cannot happen by accident or without design; there is a formal way they work and a formal design that they follow.

Thanks for the reply

Last edited on 12-09-2007 at 05:03 PM

Posted 12-08-2007 09:30 PM
Paul B. White


Registered: Nov 2007
Posts: 12

Jason,

I'm getting the impression that NKS people assume that the computer which generates the universe (call it the "fundamental computer") must be of the general purpose type; i.e. capable of doing all kinds of various and sundry computations. Is that a fair assessment? If so, I would like to point out a few problems with that.

At the fundamental level, I would argue that the universe is not so complex that it needs such a flexible computer to generate it. In fact, the basic output of the fundamental computer may only need to be a few symmetries and couplings, plus the action quantum. The rest of what we experience (e.g. space, time, particles, daffodils, etc.) could simply be fields that emerge from those things. So, if the stuff of the universe is fundamentally simple, why would nature choose to use a general purpose computer to produce it, when it could use a simpler, less “expensive”, special purpose computer for the task?

I mean, it would be shocking if Hewlett-Packard specified a Pentium to do the computations for a simple adding machine. Likewise, I think it would be shocking to learn that nature uses a general purpose computer to generate the fundamental stuff of the universe. IMHO, it would be an example of profligate waste and inefficiency at the heart of the universe.

I think a lot of NKSers are too enamored with the notions of general purpose computers and universal computation -- and this could be a stumbling block to seeing how things work at the fundamental level, since nature doesn't care about the aesthetic appeal of those things to humans. Rather, it is a good bet that nature cares only about doing stuff in the simplest possible way, whether it fits our aesthetic expectations or not.

Posted 12-10-2007 07:12 AM
tomjones


Registered: Not Yet
Posts: N/A

Paul, you are missing one key point: no matter how you think about things, particles will come about early on, and if you look at particle physics, things get quite complex quite quickly. Further, if there is a computer of some sort at the center of the universe - which I doubt - it is unlikely to look like any computer we would recognize; but that system, whatever it is, will have to generate an enormous amount of complexity very early on.

The other thing to take note of is that Wolfram seems to believe that the fundamental stuff of the universe may well be a network of connections that change based on patterns of connections.

I think the point about NKS making too much of computation is a valid one, but it plays into a point I already made: NKS fits the classic historical pattern of the most complex current technology being likened to nature, just as people used to think the world was a giant clock.

Thanks

Posted 12-10-2007 02:12 PM
Jason Cawley
Wolfram Science Group
Phoenix, AZ USA

Registered: Aug 2003
Posts: 712

Paul - yes, simplicity is likely to be a good guide. Wolfram suspects that even the initial conditions of a fundamental rule will be simple - or at least that we won't find the rule unless the initial condition and the rule are both simple, which is a significantly weaker claim. But the underlying rule is likely to be universal.

Why? Well, if every phenomenon nature shows were simple, then I'd agree with your argument, that universality would be "formal overkill". But instead we see all classes of behavior. And we know which class of formal system exhibits all classes of behavior, and it is not the simplest class.

The math of traditional physics stresses all sorts of symmetries, in part because those make it much easier to solve things. But while we see gobs of symmetry at the level of fundamental rules, the universe is not remotely simple, symmetric, periodic, fixed-point-like, or anything of the sort.

And we have formal precedent for systems evolving according to simple underlying rules, with plenty of low level structure and regularity caused by the form of the rule, that nevertheless generate elaborate complexity and variety, even from very simple initial conditions. They are computationally irreducible underlying rules.

That is enough to suspect that the underlying rules, whatever they are, will be at least that high on the computational behavior scale. Whether irreducibility and universality usually coincide is of course an open NKS question, and Wolfram's PCE amounts to the conjecture that they will. If that is true, then the previous argument extends to "the underlying rule is probably universal".

Which doesn't mean that it will have some incredibly elaborate initial condition exploiting that inherent flexibility. You can start a universal rule off simply. When you do, you still get lots of its inherent potential variety, since subpieces of the evolution pop up here and there in the pattern that mimic different initial conditions (or more strictly, pieces thereof). That's the idea. It is a possible explanation for the mix of variety and simplicity we see in nature.
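(A quick way to see this in Mathematica, for what it's worth: rule 110 started from the simplest possible initial condition, a single black cell, already shows a persistent regular background with shifting localized structures on top of it. The step count is an arbitrary choice.)

  ArrayPlot[CellularAutomaton[110, {{1}, 0}, 250]]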

It could be wrong, of course. It is a conjecture, and a fairly heroic one, as extrapolations go. But we have evidence against overly symmetric and simple alternatives, from the lack of perfect simplicity or regularity seen in the history of the universe.

Now, sure, some alternate explanations have other ways of generating that from relatively simple formalisms. But if they invoke objective chance to get there, they haven't really explained it. Instead they underspecify a universe. You can get a nice symmetric ensemble of possibles and just let objective chance pick one, but then the symmetry is entirely imposed, really.

I suppose I should add that you are right that too much should not be hung on universality alone. If someone seeking fundamental computational rules for physics just relies on an argument of the form, "rule X is universal, so it has some possible initial condition in which a computation parallel to the universe can in principle be found", that is a weak argument.

The NKS book does not do this. Instead, Wolfram considers and rejects numerous possible computational schemes as unpromising when known physical relations or symmetries would be too artificial in that setup. E.g. straight CA models do not naturally produce spatial isotropy (they tend to have preferred directions along lattice "grains"), so they are out. He wants a setup in which curved space is natural, relativity works, etc. In that sense, you are right that a focus on known simplicities has to guide any search for a computational model for physics.

FWIW...

Posted 12-10-2007 03:41 PM
Jason Cawley
Wolfram Science Group
Phoenix, AZ USA

Registered: Aug 2003
Posts: 712

tom - Well, your conventional view on the subject has the merit of showing by contrast that the NKS book has a definite result, and not one that was (or is) intuitively obvious to most people. And it is understandable - the intuition we get from engineering is that one only gets a lot out of a system if one puts a lot into it. What cures this intuitive mistake, though - and it is a mistake - is the practical question that Wolfram started from: what do entirely random examples of simple computer programs actually do?

Not programs designed to do something or other. Just simple ones. Not carefully engineered searches designed to find something that will satisfy condition X - just exhaustive search, aka looking at every single one of a small enough class. And no, one does not have to shake the things forever to find anything non-trivial. On the contrary, the simplest rule sets ever looked at already contain instances of arbitrary complexity, and even of proven universality.
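(A minimal sketch of that kind of exhaustive survey, in Mathematica: evolve every one of the 256 elementary cellular automaton rules from a single black cell and look at the whole space at once. The step count and image sizes here are arbitrary choices.)

  GraphicsGrid[
    Partition[
      Table[ArrayPlot[CellularAutomaton[r, {{1}, 0}, 60],
        PlotLabel -> r, ImageSize -> 50], {r, 0, 255}],
      16]]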

Modern instruction sets get somewhat involved to meet various engineering desires, especially to make it obvious to human programmers what is going on, and to make certain common computations the simple ones in the instruction set used, instead of requiring moderately elaborate sequences of more primitive operations. But no new possible computations are reached thereby. All that happens is that a permutation is done on the same set of computables, and the number of steps needed for some goes down (while that for others goes up).

That general computation is possible at all - that the idea of a fixed instruction set works - is not dependent on anything running to thousands of pages. On the contrary, it was noticed early on that the limited idealized instructions of the lambda calculus, Turing machines, and arithmetic all reach the same space of possible computations.

One might still have thought at that point - and most did - that only systems designed to mimic processes of human reasoning could have the property of universality. But today we know better: there are just far too many very simple systems proven universal by now, which no one "meant" to have that property. It didn't take a billion monkeys a billion years to find them, either. All that was necessary was for someone to ask the question: what do typical simple computer programs, ones that haven't been picked for any specific reason or designed to meet any desired objective, actually do?

That is the issue standing on the doorstep of NKS. Whether you are interested enough in the answer to step inside, well, that is entirely up to you.

Posted 12-10-2007 06:41 PM
tomjones


Registered: Not Yet
Posts: N/A

Thanks for the reply...

I would agree that whatever the fundamental model of nature is, it is most likely simple, for the reason that any system that achieves the complexity of nature would need an efficient way of getting to that complexity - which means not using excessive numbers of steps to get to any one place.

So the issue to my mind is not the computational space reached, but rather the efficiency of the computation. By this I mean, for example, that we could in theory encode on a classical Turing machine all the operations a current computer does, but computing a floating point operation on a Turing machine is wildly inefficient - that's why one uses specialized instruction sets.

The argument about universality and simple programs sounds to me a bit like saying: why get a hammer to pound in a nail when I can use my shoe? But maybe this is because I am not quite getting the NKS view.

Further, it seems to me there is also the question of whether the complexity generated is equivalent: is the complexity generated by a simple program the same as the complexity we see in nature?

It seems to me that the complexity generated by a simple program, while it may be universal, would have difficulty dealing with a system where the output has little to no resemblance to the underlying components.

For example, we know that we are made up of atoms, but those fundamental components and their behavior have really nothing in common with the final behavior of a human. So, to take this back to NKS: how would NKS deal with this? Can it deal with a system where the output shares no obvious relation to the starting primitives, unlike a CA, where there is an obvious relation between the cells and the output behavior?

I hope this makes sense...

Thanks

Posted 12-10-2007 09:54 PM
Jason Cawley
Wolfram Science Group
Phoenix, AZ USA

Registered: Aug 2003
Posts: 712

OK then, I'll bite - what is the obvious relation between the behavior of the emergent particle patterns in say rule 110 - which occur as the "joins" between two regions each "tiled" by periodic backgrounds, of the same or of different types on either side of the "join" - and the behavior of the cells? Can you deduce the behavior of every particle-pair "collision" in a rule 110 evolution from the form of the rule itself, without carrying out the computer experiment of evolving an instance of that collision and seeing what happens? (See the example at the end of chapter 2, starting on page 31, if you don't know what I mean by interacting rule 110 "particles").

No, see, you can't. The behavior up at the combined-cell particle level has no obvious, intuitive relation to the form of the underlying rule, even though it is a strictly logical consequence of that rule. The reason for the apparent difference between the two statements is that sometimes strictly logical consequences are involved enough that they are not at all obvious, and there is no faster way to figure them out than to carry out the evolution in question - aka to do the same computational work the system does as it evolves.
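(For anyone who would rather see this than take it on faith, a small Mathematica experiment along these lines: rule 110 from a random initial condition. The seed and sizes are arbitrary; the persistent localized structures running across the periodic background, and their collisions, are the "particles" in question.)

  SeedRandom[42];   (* arbitrary seed, just for reproducibility *)
  ArrayPlot[CellularAutomaton[110, RandomInteger[1, 400], 400]]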

In practice, just to enumerate the possible particles in a, say, 20 by 20 region of a rule 110 pattern involves a large exhaustive search - and once each is found, another exhaustive computation to see how each interacts with each of the others, one pair at a time, at each of a range of offsets or "phases" relative to each other. You are rapidly up to the scale of millions of interactions. Extend the pattern width considered to 100, and the number of possible configurations rises exponentially; make it wide enough (still perfectly finite) and it exceeds the number of actual particles in the universe.
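(The raw configuration counts behind those numbers, as a back-of-envelope check in Mathematica:)

  2^20        (* 1048576 possible settings of a 20-cell-wide strip *)
  N[2^100]    (* about 1.27*10^30 for a 100-cell-wide strip *)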

And that is just one instance. So we are never going to know them all by a priori reasoning. In practice, the status of how one class 4 particle interacts with another is an empirical issue to be determined by formal experiment, with basically the epistemological status of any other brute fact of "nature". We say the emergent level follows its own logic - not because it can't be determined from the bottom up in this or that specific case, but because doing so is so hard and inexhaustible that they are effectively independent subjects of knowledge. Just like chemistry and atomic physics...

Posted 12-10-2007 10:36 PM
tomjones


Registered: Not Yet
Posts: N/A

In a CA, if you look at enough output, you can deduce the rules that generated it. In nature, there are many systems where you cannot deduce the starting rule from the output, no matter how much output you have. I was speaking from the top down, not from the bottom up. It is these types of systems I am speaking of; they don't seem to be classifiable in NKS, where one would need one set of rules and primitives that evolve to a point where they generate a new set of rules and primitives, and so on.
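(To be concrete about the CA case, here is the kind of thing I mean, sketched in Mathematica: recovering an elementary rule from nothing but its output. This assumes all eight neighborhoods happen to occur somewhere in the sample, which they essentially always do for a random initial condition of this size.)

  data = CellularAutomaton[30, RandomInteger[1, 200], 100];
  (* read off neighborhood -> new-value pairs from interior cells *)
  tab = Association @ Flatten[
      Table[Thread[Partition[data[[t]], 3, 1] -> data[[t + 1, 2 ;; -2]]],
        {t, Length[data] - 1}], 1];
  (* reassemble the elementary rule number; this returns 30 *)
  Total[tab[#] 2^FromDigits[#, 2] & /@ Tuples[{0, 1}, 3]]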

I realize there is no way to know beforehand what the result will be...


A bit of an aside: I just picked up Mathematica 6, and I must say I like it so much more than MATLAB, which I have been using for years, and I have been playing with some NKS experiments...

Thanks

Posted 12-11-2007 03:10 AM
Paul B. White


Registered: Nov 2007
Posts: 12

Jason, I agree that the generation of our universe involves simple computations of some sort going on at a fundamental level (see my paper, "Particle Genetics and Expression" posted in another thread).

But I am confident that nature, like an engineer, is going to use the simplest system and components that will get the job done. General purpose computers are nice for running Linux, word processors, web browsers and spreadsheets, but the basic stuff of the universe is much simpler than that. Nature is not profligate, and so it will use just what it needs.

Take biological genetics as an example. Nature uses a sequence of exactly three nucleotides to specify an amino acid. A sequence of two would not be enough to specify 20 amino acids; and four would be a profligate waste. The minimum that is necessary to do the job is a sequence of three, and that is what nature uses.
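(The counting behind this is quick to check in Mathematica:)

  4^2   (* 16 - too few codons to cover 20 amino acids *)
  4^3   (* 64 - enough, with redundancy left over *)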

Another important thing I think everyone is missing is that the system which generates the stuff of the universe is not going to be logically flat. That is, the components of the system are not all in the same logical plane. There is a kind of “z axis” in the universe's logical space, which produces a hierarchy of levels with concomitant dependence and independence relations among those levels. These built-in logical relations are the “rules” that do the computational work. They don't generate colored cells like a CA; rather, they generate fundamental couplings and symmetries. We don't need to specify these rules via software or initial conditions -- they are built in from the get go. So you don't need to feed in software from the outside and you don't need a “programmer” to write the code. This, I believe, is the apex of simplicity -- Occam's razor in its finest form, reducing everything to the bare minimum that you need to get the job done.

For evidence in physics of this underlying logical hierarchy, look at the different scopes of the fundamental interactions. As you go from gravitation to EM to the strong interaction, the scope gets progressively smaller. Gravitation has universal scope (i.e. it affects everything); EM has intermediate scope (affects a lot of things, but not everything); and the strong interaction has a very narrow scope (affects only a small band of things).

What does this scope pattern remind you of? It reminds me of the nesting of logical blocks within a proof (or, similarly, the nesting of statement blocks in a computer program). The initial premises in a proof have universal scope (i.e. logical priority to every other part of the proof), and can be asserted anywhere in the proof. But a nested assertion such as "Now suppose that blah blah...", forms a logical block that has limited scope within the proof; and another such assertion within that block forms another nested logical block with an even smaller scope; and so on.

The fundamental interactions exhibit this same type of scope pattern, and so it is reasonable to suspect that there is logical nesting going on, and thus a fundamental hierarchy of levels. These levels aren't separated in physical space and time, they are separated in a logical space, and, together with their concomitant dependence and independence relations, are available everywhere to do the computational work of generating couplings and symmetries directly, and (via emergence) fields such as space, time, and particles.

If you are interested in learning more about my system, read the paper “Particle Genetics and Expression”, which is attached at the beginning of the thread A candidate simple program for generating fundamental physics.

Last edited by Paul B. White on 12-11-2007 at 05:50 PM

Posted 12-11-2007 10:14 AM
RLamy

Paris, France

Registered: Nov 2007
Posts: 16

A reply to tomjones' last 2 posts

The meaning of the output is purely in the mind of the observer. In principle, you can use Rule 110 to render a photorealistic image of the Eiffel Tower starting from a 3D-model. The complicated, and at first sight random, pattern of gliders you will obtain as output will have exactly the same relation to the Eiffel Tower as the complicated pattern of bits (conventionally called a JPEG file) you would have obtained if you had chosen to use your computer with suitable software.

By definition, starting from any computationally universal system, you can build a tower of emulation of universal systems. At each level, you can completely forget about how the previous one works, remembering only the translation table between lower-level patterns and higher-level primitives. In that case, it is perfectly natural to ascribe to the lower-level patterns the meaning of the higher-level primitives.

The translation table can only be obtained through brute force computation. This means that going down a level is a hard problem even if you know what the lower system is. And remember that any universal system can emulate anything, so you can choose whatever universal system you want as the lower system. You can stare at the Eiffel Tower for as long as you want, you will never be able to discover rule 110, and looking at the Statue of Liberty or the Taj Mahal won't help either.

Last edited by RLamy on 12-11-2007 at 08:54 PM

Posted 12-11-2007 07:15 PM
tomjones


Registered: Not Yet
Posts: N/A

Of course you still have the problem that your universal system can't model many phenomena, so such systems are obviously not as powerful as you make them sound. They are further hampered by the fact that just because a system can emulate something does not mean it is a good model. For example, I could make rule 110 emulate a current computer, and it could do it - it would just be fantastically slow and inefficient.

As to the meaning of the output being merely in the mind of the observer, that doesn't fly. For example, I can say x + a / b^2 = 3.196828 is the size of a proton, so in my mind the output of this system is the size of a proton - this means nothing and is utterly ridiculous. I think what you meant to say was that one can map one system onto another, and, assuming a proper mapping, the output of the emulating system will represent the system being emulated.

Thanks

Posted 01-01-2008 09:13 PM
Luis Rodriguez


Registered: Nov 2008
Posts: 1

NKS and physical computers

Tomjones says:
"NKS is only possible because of the carefully designed digital computers."

NKS is a new math. It is a kind of programming language. It does not need a physical computer. John Conway invented the Game of Life mentally and then played it with paper and pencil.

Naturally, modeling certain processes with the game required the use of a computer. Subsequently it was shown that the Game of Life can emulate a universal Turing machine.
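(Indeed, in Mathematica the Game of Life itself is essentially a one-liner - I believe {224, 2, {1, 1}} is the standard two-dimensional outer totalistic encoding for it; the board size and step count below are arbitrary:)

  board = RandomInteger[1, {40, 40}];
  ArrayPlot /@ CellularAutomaton[{224, 2, {1, 1}}, board, 4]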

__________________
Ludovicus

Posted 11-14-2008 06:34 PM