A New Kind of Science: The NKS Forum (http://forum.wolframscience.com/index.php)
NKS Implications for Practical Applications
I'm interested in the insight of NKS experts, ideally Stephen Wolfram himself, on the following analysis. I am an AI researcher with experience in automated search, so my comments come from that context.
A fundamental insight of NKS is that a very simple structure (i.e. a simple set of rules) can produce unbounded complexity. This insight is characterized as exciting for science and engineering because it means that we might be able to "mine the computational universe" for these simple structures, and from them derive wonderful and complex processes and artifacts that would otherwise be impossible to discover or design.
Yet based on my knowledge of search, I would draw the opposite conclusion. The NKS insight, which is genuinely novel and interesting, seems to actually have negative implications rather than the positive ones touted.
The main reason is that "mine the computational universe" is another way of saying "search." That is, if there are indeed very useful simple structures out there that produce complexity, then our next task is to find them. Finding a particular instance among many possibilities means search.
It seems that the message of NKS is that somehow we are now at an advantage in this search by virtue of the fact that what we need to search for is smaller than we might have expected. In other words, in the parlance of search, the number of dimensions in the search space may be much smaller than we'd imagined. And that is supposedly good because that makes search easier.
Let's look at CAs for illustration, without loss of generality. Wolfram points out that a few of the 256 elementary CA rules (each an 8-bit rule table) have the special property of genuine complexity. Rule 30 is an example. However, two very important facts are left out:
1) Most of the 256 rulesets of this size do _not_ have this property and thus are not like rule 30.
2) The particular complex system codified in rule 30 is only _one_ such system, and likely not the one we are looking for. For example, we may be looking for a building architecture or a musical piece. Even though rule 30 is interesting, it is probably not the rule that generates exactly what we want.
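To make the setting concrete, here is a minimal Python sketch (my own illustration, not anything from NKS materials) of an elementary CA: an 8-bit rule number indexed by the three-cell neighborhood, run from a single black cell.

```python
# Minimal elementary-CA simulator: each rule is an 8-bit table whose bit k is
# the new cell value for the neighborhood whose (left, center, right) bits
# encode the number k.

def step(cells, rule):
    """One synchronous update with wraparound boundaries."""
    n = len(cells)
    return [(rule >> (cells[(i - 1) % n] * 4 + cells[i] * 2 + cells[(i + 1) % n])) & 1
            for i in range(n)]

def run(rule, width=79, steps=30):
    row = [0] * width
    row[width // 2] = 1  # single black cell in the middle
    for _ in range(steps):
        print("".join("#" if c else "." for c in row))
        row = step(row, rule)

run(30)  # rule 30: one of the few elementary rules showing genuine complexity
```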
Fact #2 is not an indictment of NKS; it simply means that we must search for the rule that actually gives us what we want. Hence the importance of search, or "mining." However, fact #1 is highly concerning because it means that there are massive discontinuities in the search space.
In general, an effective search requires the search space to be roughly correlated and orderly. In other words, if I shift one bit in my ruleset, ideally the CA that results is discernibly related to the one I had before. That's how search works: it's based on the assumption that taking one step doesn't land me in another universe.
However, it appears that in the world of interesting CAs, in fact taking one step _does_ land me in another universe. This is very bad news for search, because it means you cannot search this space systematically.
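To illustrate the point, one can compare rule 30 against each of its eight one-bit neighbors in rule space and see how unrelated the evolutions are. A sketch of my own; the difference measure is an assumption, not a standard NKS metric:

```python
# Sketch: does flipping one bit of a rule's table give "discernibly related"
# behavior? Compare each one-bit neighbor of rule 30 by the fraction of cells
# that differ over a short evolution (a crude proxy for relatedness).

def evolve(rule, width=101, steps=100):
    row = [0] * width
    row[width // 2] = 1
    history = []
    for _ in range(steps):
        history.append(row)
        row = [(rule >> (row[(i - 1) % width] * 4 + row[i] * 2 + row[(i + 1) % width])) & 1
               for i in range(width)]
    return history

base = evolve(30)
for bit in range(8):
    other = evolve(30 ^ (1 << bit))  # flip one bit of the rule table
    diff = sum(a != b for ra, rb in zip(base, other) for a, b in zip(ra, rb))
    total = len(base) * len(base[0])
    print(f"rule {30 ^ (1 << bit):3d}: {diff / total:.0%} of cells differ")
```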
The fact that the ruleset that produces rule 30 is small does not help. The cost in search comes mostly through _evaluation_ rather than through the dimensionality of the ruleset. That is, in order to test any rule, I must run the CA, and then convert whatever results into a simulation of the substrate I'm interested in. For example, if I am interested in building architectures, then I have to convert a run of rule 30 into its corresponding building architecture and then run a series of tests for whatever criteria I care about. All that running, conversion, and testing means that for any candidate ruleset that I check, I incur a significant computational expense, regardless of the size of the ruleset.
Well that's not particularly surprising, because that's exactly why people run search algorithms: They want to minimize the number of candidate solutions that must be checked.
Yet discontinuity in the search space implies that you _can't_ minimize the number of solutions you check because there is no meaningful relationship among neighboring CA rulesets.
In fact, the whole premise of NKS, i.e. that there are these diamonds in the rough, is actually an indictment of searching through such a space. In a good search space, the complexity of rulesets of the same size would be correlated, not wildly unrelated. When you moved up to a larger ruleset, you would get a commensurate boost in the complexity of the patterns that are generated. That would be a correlated landscape, the kind that is searchable and makes sense.
However, here Wolfram is saying: look, we have a completely uncorrelated landscape where, among the very smallest rulesets, there are totally idiosyncratic jewels that bear no relation to their neighbors, and no relation to the size of the ruleset.
The only way to search such a landscape is by pure random coin flips. There's just nothing else to go on. So that means if you have 256 rules and one of them is what you want, you will have to evaluate 128 rules on average!
That doesn't sound so bad, but imagine a space of 7 trillion rules. For example, Stephen Wolfram mentions that there are about 7 trillion such CAs with 3-color rules. OK, so let's say among those 7 trillion is exactly the rule we want to generate some amazing solution to a particular problem. Then, if blind search is the only feasible approach, we will end up evaluating about 3.5 trillion candidates before we find it. For any nontrivial problem, that will take until the end of the universe.
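The arithmetic is easy to check. A minimal sketch, where the per-candidate evaluation cost of one second is my own (generous) assumption:

```python
# Back-of-envelope for blind search over the 3-color, nearest-neighbor CA space.
n_rules = 3 ** 27            # = 7,625,597,484,987, the "~7 trillion" above
expected_trials = (n_rules + 1) / 2
seconds_per_eval = 1.0       # assumed cost to run, convert, and test one candidate
years = expected_trials * seconds_per_eval / (3600 * 24 * 365)
print(f"{n_rules:.3e} rules, ~{expected_trials:.3e} expected trials, ~{years:,.0f} years")
```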
So I don't see how this can be useful in a practical sense. What would make more sense is if somehow you could generate more complex systems by adding more rules, i.e. by expanding from 8 to 16, or something like that. Then if the new, higher-dimensional space has a relationship to the old lower-dimensional space, you would be able to leverage what you learned in the 8-rule search in order to step up to a new level of complexity. That's a logical, feasible search space with a meaningful heuristic. That's actually also how complexity is generated in most well-understood systems: you start simple and build up from there.
However, what NKS is claiming is that in fact the dimensionality of the ruleset is uncorrelated to the complexity of the solution, AND the complexity of the solution is uncorrelated among neighboring rulesets of the same dimensionality. In other words, the complex solutions lie essentially randomly distributed throughout the spaces (plural because there are spaces of different dimensionality) of rulesets.
So it seems like this all boils down to something perhaps interesting but not useful from a practical perspective. All it says is that if you blindly try out several trillion random numbers, one of them will win the lottery. Granted, that one that wins the lottery is in some sense incredible, but no principled method has been provided to reach it. In fact, on the contrary, evidence has been provided to show that it is likely there is NOT such a method (because the space is uncorrelated). Therefore, the implication is that there is no practical way to leverage this insight. All that we have learned is that magic is out there, though we will likely never find it.
"In general, an effective search requires the search space to be roughly correlated and orderly. In other words, if I shift one bit in my ruleset, ideally the CA that results is discernably related to the one I had before. That's how search works- it's based on the assumption that taking one step doesn't land me in another universe."
Might "taking one step ... land me in another universe..." because that step assumes an ordering an corelatedness that is inappropriate for the task at hand?
For example, if the order is nested, arguments that suppose a linear search space will at some point land you in another universe.
L. J. Thaden
I've been thinking about this some.
Realize that once we find a particular program, say rule 30, it's then possible to make a whole amusement park of variations. Rule 30's with multiple colors, rule 30's with wider ranges; rule 30's cutting through multiple dimensions, even rules that may do tons of other, uncomputed and unknown things when run but still in select cases produce rule 30 behavior.
All of the methods to do all of these things may not be developed yet, but they are essential and must be done at some point.
Are our present methods of discovery actually searching the computational universe the way it will be searched in the future?
Being aware of all the possible elaborations on a given theme reduces the number of configurations. That is just one way to do it.
To explain what I mean: imagine we are able to calculate all of the 'upper' rules that produce rule 30 behavior to some definite effect, so that we can tell the similarity if one triangle is yellow and the other is blue, or that it is just being emulated through time in 2 dimensions or higher. Say that we can find all or just some of the ways that Turing machines, SSS's, network systems, symbolic systems, etc. can be made to produce rule 30 behavior. What then if I am to search for a program that produces randomness? Perhaps I want it to operate in a certain way so that the chip that I'm making can use it, or so that I might find how it is possible to evoke that kind of behavior in some nano-scale environment with my little pieces of matter. Then, as I search for such a thing, a connection can be drawn between cellular automata, or any other simple program, into the light of the real world of specific implementations, perhaps. Again, that's just one way.
A rudimentary advance in this direction would be to create a small Mathematica program that tells you all the rules that produce a certain behavior (in, say, cellular automata) in their higher 'octaves,' or whatever you would call them. All of the rules that produce rule 254 behavior regardless of color in all classes? Which rule number of a 5-color, range-1 CA produces rule 30 behavior as an interaction between the 2's and the 3's when they are all that is put in the initial conditions? There is more than one, since all of the other conditionals in the rule involving more than just those two colors can be anything. So how do we sample just from this space, easily?
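As a rough illustration of how such a program might begin (a Python sketch of my own; the indexing convention and function names are assumptions, not an existing Mathematica facility), here is the rule 30 version of that question for 3-color, range-1 rules:

```python
# Sketch of the "higher octave" idea: which 3-color, range-1 rules reproduce
# rule 30 when the initial condition uses only colors 0 and 1?
import random

K = 3  # number of colors

def emulates_rule30(table):
    """A 3-color table emulates rule 30 on {0,1} inputs iff its entries for
    all-{0,1} neighborhoods match rule 30's (0/1) outputs, so no 2 ever appears."""
    for left in (0, 1):
        for center in (0, 1):
            for right in (0, 1):
                idx = left * K * K + center * K + right
                want = (30 >> (left * 4 + center * 2 + right)) & 1
                if table[idx] != want:
                    return False
    return True

def random_emulator():
    """Sample one such rule: pin the 8 constrained entries, randomize the other 19."""
    table = [random.randrange(K) for _ in range(K ** 3)]
    for left in (0, 1):
        for center in (0, 1):
            for right in (0, 1):
                table[left * K * K + center * K + right] = (30 >> (left * 4 + center * 2 + right)) & 1
    return table

t = random_emulator()
assert emulates_rule30(t)
print(f"3**19 = {3 ** 19} such rules exist; sampled one:", t)
```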
There are a lot of things to do here. Has anyone done anything like this?
Laurence Thaden asked:
"Might 'taking one step ... land me in another universe...' because that step assumes an ordering an corelatedness that is inappropriate for the task at hand?
For example, if the order is nested, arguments that suppose a linear search space will at some point land you in another universe."
The deeper question is whether there is any kind of correlation at all in the distribution of "interesting CAs," both among all CAs of a particular ruleset dimensionality and across ruleset dimensionalities. You propose "nested" correlation; I can imagine others as well.
If Wolfram (or anyone) can indeed demonstrate such structure in the search spaces, then you are right that we could leverage it to our advantage and search by taking steps in the transformed spaces (i.e. transformed relative to the structure we know exists).
However, the evidence presented in NKS seems to imply, on the contrary, that there is no structure or correlation at all. The point, rather, seems to be that you can find interestingness anywhere, regardless of dimensionality or configuration. That implies there is nothing of which we can take advantage, no place to get a firm grip. It's just a lottery.
Now, if it turns out that a somewhat orderly structure can be demonstrated over the set of interesting CAs, that would be an exciting contribution. However, it seems to me that if that's possible then THAT should have been the topic of NKS, rather than the contrary point that these interesting things are just out there with no regard to search space dimensionality or structure. Wolfram in fact seems to be expressing excitement at the idea that there is no particular rhyme or reason to where you might find complexity. To me, that is bad news and not good news, and should therefore not be touted as progress. Progress would be the opposite: Show me meaningful structure among and inside the search spaces and I can then show you potential.
Various people have looked at the issue of where 3s and 4s, or just 4s, occur in various rule spaces, with low-color CAs getting most of the attention. While there was some hope at first that there would be a simple answer that accounted for all the places complexity was found, that proved overly simplistic once more details were examined. The basic story is that the space has structure, but the structure is not simple, and mapping it is as difficult as any other problem involving complex behavior in simple rules.
Langton's lambda was the first serious attempt. A single simple parameter, based on how populated the rule table was and on avoiding symmetries, was supposed to predict where the 4s in particular would be found: supposedly at transition points between well-populated spaces of mostly 2s on one side and mostly 3s on the other. But it predicted more segregation of 3s from 2s than was really observed. And it could not account for all 4s - some showed up where it did not expect them.
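For reference, for a two-state CA Langton's lambda reduces to the fraction of rule-table entries mapping to the non-quiescent state; a one-line sketch (ignoring the symmetry refinements mentioned above):

```python
# Langton's lambda for an elementary rule: the fraction of the eight
# rule-table entries that map to the non-quiescent state (here, 1).
def langton_lambda(rule):
    return bin(rule & 0xFF).count("1") / 8

for r in (0, 90, 30, 110, 255):
    print(r, langton_lambda(r))  # e.g. rules 30 and 110 both sit mid-range
```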
More recently, some NKS researchers have made maps of CA classes based on partitioning their rules into "half rules", or more in higher dimensions. This effectively cuts up the possibility space into nested subcells that share partial similarities in their rules. Then cells within the map are colored according to the Wolfram class of the behavior actually seen. One finds that 3s and 4s cluster, being higher frequency along lines or frames through the space, with more symmetric cases that don't use the "right" half-rules much more likely to show class 1 or 2 simplicity.
But the map is not sufficient to predict which rules will be class 3-4 and which will be 1-2 in spaces not yet examined. It might improve your chances, if you looked at rules corresponding to 3-4 behavior in a lower number of colors or lower range "pre-image" of the rule you are looking at. This is intuitive enough if you think about it - imagine starting with rule 110 and then adding cases to its rule table for the new color "2". Clearly, it is already computation universal on a pattern of 0s and 1s. So for some initials, at any rate, it will show class 4 behavior - even with additional rule cases added that sometimes produce 2s.
As a rule space gets much larger, it becomes possible to stuff sub-behaviors off in a section of the rule's possible cases. Imagine a 6 color range 1 rule. It could have rule 30 like relations in the interaction of color 1 with 3, or color 4 with 5. Or rule 54 like interactions between colors 2 and 6. Etc. So clearly, some subrule information has to "carry upward" to "generalizations" of the lower level rules that are already giving complex behavior. Therefore, there will be continuing structure - has to be. But there will also be new cases of complex behavior that arise at the new number of colors (e.g.), where it was not there yet in "similar" (in the sense of "factored" rules or common specific cases etc) previous rules.
One can try to characterize the space of rules reached as you increase the computational resources of the system. It has been done for some things like CAs. Basically the 3s become more common. 2s never disappear, and 4s continue to appear but form a declining portion of the space (which is exploding in total size exponentially or more, as you add more colors etc).
If there were an easy way to tell by looking just at a rule table what the resulting behavior would be, then sure we'd investigate that and make a science of it. Instead we find diminishing returns rapidly, when we proceed with that expectation. We get a few useful things. There may be more and it is worth looking. But in the end, the way to tell if a rule does something complicated is to evolve it - to use an experimental not a deductive approach. And NKS tells us to expect that, that we should not expect all interesting phenomena in simple programs to be readily foreseeable, and should instead be ready to be surprised.
When Wolfram ran his first exhaustive CA search, he did not expect any of them to yet have enough going on in terms of colors and allowed connections, to do anything remarkable. They surprised him. To be surprised, you have to be willing to actually look where you don't know what to expect, instead of trying to deduce beforehand where you will find everything.
I hope this helps.
Jason, thanks, your post was enlightening. I am glad to hear there is some structure in the space. Of course it's not important that one be able to map the entire set of possibilities; that can't be done in any search space unless it's trivial. But in order to be searchable it must have some correlation, and your post suggests that at least CA space does.
I think this brings me to a more general question though. When computer scientists introduce new representations, usually the motivation is that the new representation somehow reorders the search space to be more easily traversable. In general, "excitement" about a particular representation usually stems from how it maps to the space of your target substrate. For example, a good representation of melody would map to the space of melodies in such a way that small steps in the representation space lead to songs of similar complexity and related themes. That would be a good representation for search.
Yet it seems that NKS researchers are not excited for this reason. Or am I mistaken? That is, NKS does not purport to rearrange the search space of complex systems in a way that makes it more amenable to search. Rather, what you're saying is that there are simply some cool rules out there and you just need to basically exhaustively or randomly try out candidates; in your words, "To be surprised, you have to be willing to actually look where you don't know what to expect."
I'm just having trouble understanding this philosophy. Do you recognize that what you are dealing with is searching through a particular representational space? What makes such a search exciting other than the structure of such a space being correlated? Yet I gather that you are not claiming there is anything particularly good about the structure of the space, but rather that the good thing is simply that "remarkable" things exist in it.
There are remarkable things in virtually any search space. For example, if I assemble a bunch of random neurons, there are going to be some neural networks that pop out that are actually useful. However, the chance of finding one of those is minuscule if I just search blindly. So you don't see scientists jumping for joy that an artificial neural network exists out there with the same functionality as the human brain just because "all we have to do is look where we don't know what to expect." The whole problem is that we _don't_ know where to look. It's not an exciting situation when you have a totally blind search process and you're just excited about flipping coins until the end of the universe. There needs to be something about the search process itself or the structure of the representational space that gives us reason to be optimistic.
And I don't hear that from you, which is what's confusing me. You seem to be saying just that we should be excited simply because some CA somewhere out there might be useful for something? The statement, "To be surprised, you have to be willing to actually look where you don't know what to expect," sounds naive from a computational perspective. It sounds more like what you'd expect a lottery-player to say while buying a ticket at the drug store. Sure, in the world of million-dollar winnings, why not buy a few tickets and take a risk, but in the world of science and engineering you need to follow principles to get to anything interesting, and your approach seems to be nothing more than essentially saying "we don't need principles." What else is there to the theory that makes us confident we will find something useful?
I guess my underlying question is whether you are aware that the relevant field to your investigations is computational search? And if so, what contribution does your theory make to the field of search that makes it any different from random coin flipping, also known as "exhaustive search?"
The beauty of using simpler rules is that the spaces can be explored by exhaustive search. The perhaps surprising discovery is that setups so simple they can be explored exhaustively already give useful results. A space of the size "a million possible rules" is a typical case where one can just grind out literally all of them. The search method is "start at the low end", in other words, and keep grinding until you hit complexity. You will hit it before you run out of computational resources. Part of the point of NKS is that you don't need to go very far to hit complex behavior, and that once you have arrived at it, going further will often not be necessary, or will not add anything essential.
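A sketch of that "grind out literally all of them" procedure for the 256 elementary rules, using compressed size of the evolution as a crude complexity screen (the proxy is my assumption, not how NKS searches were actually scored):

```python
# Exhaustive search from the low end: score every elementary rule by how
# incompressible its evolution from a single cell is.
import zlib

def evolution(rule, width=101, steps=200):
    row = [0] * width
    row[width // 2] = 1
    out = bytearray()
    for _ in range(steps):
        out.extend(row)
        row = [(rule >> (row[(i - 1) % width] * 4 + row[i] * 2 + row[(i + 1) % width])) & 1
               for i in range(width)]
    return bytes(out)

scores = sorted(((len(zlib.compress(evolution(r))), r) for r in range(256)), reverse=True)
print("least compressible rules:", [r for _, r in scores[:10]])
# class 3 rules such as 30 and 45 tend to top this list
```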
For instance, and to give a sense of the scale, Wolfram verified by exhaustive search that the 3 million simplest Turing machines can give behavior only up to nesting, but not anything more complicated. He found that one more level up, roughly 5 per million gave complicated results. (A later rule 110-based proof found the simplest known universal TMs.) With register machines, after finding nesting in a space of tens of thousands, he verified exhaustively that the first 276 million never gave more than nesting. With an additional allowed instruction, 126 out of 11 billion possible instances show complex behavior beyond nesting. (One can push their enumeration a bit farther exhaustively because the memory requirements for system state are so modest.)
Incidentally, a random instance and exhaustive search are not the same thing. With exhaustive search you are not guessing or sampling, you just do literally all of them.
For middling levels, a bit more complicated than the exhaustively searchable ones, you can use representative sampling, exploiting the fact that complex rules are an appreciable fraction of the space.
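A sketch of such representative sampling in the 3-color, range-1 space; the compression proxy and its thresholds are assumptions I'm making for illustration:

```python
# Representative sampling: draw random 3-color rules and screen them with a
# crude compression proxy for "complex-looking" behavior.
import random, zlib

K, WIDTH, STEPS = 3, 101, 150

def run_table(table):
    row = [0] * WIDTH
    row[WIDTH // 2] = 1
    out = bytearray()
    for _ in range(STEPS):
        out.extend(row)
        row = [table[row[(i - 1) % WIDTH] * K * K + row[i] * K + row[(i + 1) % WIDTH]]
               for i in range(WIDTH)]
    return bytes(out)

hits, trials = 0, 200
for _ in range(trials):
    table = [random.randrange(K) for _ in range(K ** 3)]
    ratio = len(zlib.compress(run_table(table))) / (WIDTH * STEPS)
    if 0.05 < ratio < 0.5:  # neither trivially ordered nor pure noise (assumed cutoffs)
        hits += 1
print(f"{hits}/{trials} sampled rules pass the crude complexity screen")
```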
If you know your target system has to be in a form well beyond the complexity threshold, but are also aware of where that threshold is and where instances across that threshold appear, you can generate more involved instances from known simpler ones, by starting from known complexity-producing rules, and adding cases to them (extending rule tables, etc) to cover additional features. Naturally you get a bunch that fit that way (each added bit can take any of a range of values), but a much smaller bunch than the whole space. And you can then employ exhaustive or sampling search in that subspace.
There are other issues in effective enumerations. Avoiding redundancies and the like.
I would argue that NKS is not ALL about "computational search". To me, that part has already been done.
NKS and "the book" exists because of the fact that Wolfram wanted to know where complexity begins to show up in systems, and so he has already performed countless enumerations on many systems for us to find out exactly where this "complexity threshold" lies. And from this we know that it is just a few steps away from the simplest of system setups.
So this tells me that if I really wanted to *blindly* find, say, a simple random number generator, then I should really start my search from the bottom up: simplest systems with small enumerations first, then onward. Not the other way around. In this fashion, I know that I may find what I am blindly looking for sooner rather than later, perhaps even in one of the easily searchable 256 two-color nearest-neighbour CAs.
But uninformed exhaustive searching is not always the name of the game. For although "pure" NKS may include the use of blind and/or exhaustive searching in some cases, for "applied" NKS, in which we use simple systems to model some other system we are interested in studying, we are now directed in our search by the "idealized" details of the system we wish to emulate.
For example, in the case of the 2D CA "snowflake model", you can effectively capture the model through reasoning more than search: "What happens if I simply set up the rule so that a cell becomes black whenever its neighbours are, to reflect that ice forms only where other ice is already? No good, just a faceted form that directly reflects the underlying lattice. OK, then. What now? I am sure I have the basic structure... What about the effects of the release of latent heat?" Done, with minimal search.
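For readers who want to try this line of reasoning, here is a rough sketch of that kind of growth-inhibition rule, on a square lattice for simplicity (the book's snowflake model uses a hexagonal one; this is an analogue of mine, not the model itself):

```python
# Square-lattice sketch of inhibited growth: a cell freezes only when exactly
# one neighbor is frozen, crudely capturing latent-heat inhibition.
N, STEPS = 31, 12
grid = [[0] * N for _ in range(N)]
grid[N // 2][N // 2] = 1  # seed crystal in the center
for _ in range(STEPS):
    nxt = [row[:] for row in grid]
    for i in range(1, N - 1):
        for j in range(1, N - 1):
            frozen_neighbors = grid[i - 1][j] + grid[i + 1][j] + grid[i][j - 1] + grid[i][j + 1]
            if grid[i][j] == 0 and frozen_neighbors == 1:
                nxt[i][j] = 1
    grid = nxt
print("\n".join("".join("#" if c else "." for c in row) for row in grid))
```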
From this perspective, it seems that mining the "computational universe" is not solely about running some random rule, plucked arbitrarily from a perhaps intractable space of possible rules, to see if it will generate something "useful" or recognizable, like a portrait of Leonardo da Vinci. Really, that would only give us further evidence of the amazing fact that there exist only 4 fundamental classes of behavior, and their mixtures.
Rather, NKS tells you where to start your search, in which direction you should go, and how you should proceed. In other words, it gives one great confidence to start from the simplest of system setups, to add into the model only what you already know about it, then run, observe, and tweak.
I will end with this quote, though I cannot remember where I got it:
"According to NKS, when developing a class of models to search, it is important
to build in as little structure as possible, instead of trying to build in the
known behavior a priori. That behavior may possibly emerge in unexpected ways
from a simpler structure containing fewer assumptions, and will be more likely
to reproduce aspects of the behavior that are not yet known or designed for."
Jason and Mark, thank you for clarifying the NKS perspective. I think I'm becoming more clear on it. Your remarks really have been illuminating. I hope you won't mind my pressing a little more because I really do feel I'm getting closer to the core of what's exciting here, but I don't see it yet.
What I'm hearing is that NKS is really not about search. Rather, it's about how complexity is just found all over the place, so you hardly really have to search to find something interesting. I say "hardly" to mean that you could even do a random search or an exhaustive search and still often get something good. Or, as Mark points out, you can use your own knowledge of a domain to steer you in the right direction without the need for search.
That all sounds nice, but there's still one piece missing: Finding something complex isn't useful unless it's the _particular_ complex system you need. That's what's causing me to still not be able to appreciate what's exciting about NKS.
If I need a complex system to model the weather, or the human brain, or a mechanical engine, or any particular target domain, I can't just take any arbitrary complex process and force it to be a model of the phenomenon of interest. Perhaps a snowflake can be modeled through brute force but that's because snowflakes ARE relatively simple. A human brain on the other hand, is not simple at all.
So let's say I find rule 30 and I want to have a simulated brain in my computer. What then? Have I made any progress whatsoever?
In general finding complexity alone is not the primary problem in modeling and simulation but rather finding the RIGHT complex system. There are of course many more than trillions of complex systems out there. Simply finding one is of no use if what you want is another one.
It also doesn't particularly help that some small sets of rules produce complex systems if they aren't the right ones. In fact, it seems almost anathema to finding the right ones, since there are not enough bits in the ruleset to make any principled changes to "adjust" the complex system in the direction you want. But here we are talking about search again.
But that's the problem: it still looks like doing anything useful will require search. The snowflake example isn't really representative of anything interesting. Scientists would easily have been able to deduce a model of snowflakes based on all the principles Mark pointed out, with or without NKS. I don't think anyone would be surprised that simple rules lead to pretty, fractal-like snowflakes, and since we already pretty much know what rules to use, who needs any of this NKS stuff in that situation anyway? The exciting stuff is supposed to be finding things that we DON'T already know.
The real issue is how will I get truly complex models of real things that matter. For example, Wolfram talks about the possibility there is a CA (or some simple system) that produces the universe. Can you explain how we would find it without searching? And if it would require searching, then why are we excited about NKS when it gives us no tools to enhance the search?
NKS also seems to hinge on the idea that the size of the ruleset being small somehow helps. But even in a set of only 1,000,000 candidates (which would be extremely small for any serious problem domain), you can't just "enumerate" all possibilities for a domain like brain modeling. Think about it: each candidate brain will take days, months, or years to check. The expense is in the RUNNING of the candidate, not in the rule set itself. Don't you see, if each of my 1,000,000 candidates takes 1 month to test then an exhaustive search takes 1,000,000 months regardless of the ruleset dimensionality! The only domains you can exhaustively search or randomly sample are toy domains in which testing a candidate solution happens in a few seconds or less.
Part of me has a vague notion that you are implying that one single complex system is as good as any other for modeling anything. But that can't possibly be the case can it? You do agree that you have to find the right complex system in order to be useful right? I'm just trying to see where we are having a divergence. It seems there may be some premise I'm missing.
All that I will add to this is that NKS is NOT a *free ride*. Your posts suggest to me that this is what you think it is, or what it purports to be. NKS is primarily about running computer experiments on simple systems to answer questions raised by related hypotheses one may have. And since the science is still in its infancy, there are many basic questions that need to be answered first before more major results can be obtained. And yes, this, like ALL things, will require some searching.
So you can't now just search for and run some setup of a simple system such as a CA and expect results like "Hey, I just found a CA rule that emulates the visual cortex!". First off, how would you know what you were observing without first having some reason to look for it? On what other formal results is this conclusion based? Second, the funny thing is that NKS suggests that yes, this system you just ran may in fact model all or part of the visual cortex, or even say the known Universe. But without guidance first from a set of hypotheses, based on other formal results from studying simple systems or other sciences, what conclusions could you draw from your observations, and why would you pursue this system further?
Of course, as you noted, one way to check is to run the candidate system for billions of years to see what behaviour emerges, though you will never get past your first attempt. So forget that idea.
But apparently, this is not stopping Wolfram from searching for a simple system that can "produce" our Universe. Now ask yourself why. Does he lack common sense and good judgement? Most likely not. He most likely searches because, using NKS principles along with results from the relatively mature field of Physics, he has formulated the problem such that the search becomes a lot more feasible and tractable. Otherwise, why bother?
You are all asking good questions here, and making good points. Like Mark said, we are going to have to see how useful NKS will turn out to be. It hasn't had the same amount of time and effort put into it as the centuries of OKS, or even the fifty years or so of CS.
At the moment, the known practical uses are fairly limited. The uses in scientific modeling are greater in number, which suggests that practical, technological uses are out there to be discovered.
One small comment about the "mining the computational universe" concept. It is not just about searching: in the analogy, mining is not just about finding a good spot to build a mine. I prefer the analogy of searching the natural world for useful things, like a rainforest for special fruits or medicines.
Every interesting simple rule has a bunch of details, and quirks, which have yet to be discovered and understood. Just like those plants in the rainforest, it is nontrivial to immediately find their best uses. I expect those quirks can be useful.
Instead of walking around a forest, looking for a nice steel girder, one makes do with what one finds.
Mark and Todd, thanks, I think I'm seeing the side of it that I was missing: You guys aren't really excited about the possibility of finding something that you're looking for so much as just looking at random complex things and finding something surprising in them. Let me know if this is on the right track.
That would be consistent, though, with the idea that complexity is everywhere, in many places we wouldn't have expected to find it so easily. So now you can go look at those places and "mine" them for insight. In this case, "mine" means something different from search: it's mining in the sense of already having struck gold and now starting to dig more and more.
Ok, so I get that. But now I'm naturally curious how something like that would actually work. I've seen rule 30, for example, so now what is the procedure for "mining" it? How do I extract the "hidden medicine" from an arbitrary complex distribution? Is it even realistic to expect to do that? I mean, if you for example found a system that happens to be the same as the nuclear reactions in our sun, how would you possibly determine such a thing without knowing a priori what you're looking for? Unfortunately, it sounds like search again, but this time in reverse: searching for a problem to match a random solution. Is this really realistic? Do you have any anecdotal evidence or example of doing something like that?
I feel it's fair to say that, having now fairly well established that NKS offers basically nothing in the way of search, this is a bit of a deflation of the idea's importance. NKS really is more or less the equivalent of Todd's wandering around the rainforest hoping a random berry will be useful. I mean, that's not crazy or anything, but it's not all that exciting compared to a real scientific discovery like the theory of relativity. It's not really scientific either. It's not what you'd call a "principled" approach, while most scientific theories delineate principles.
Also, can you see why it smacks of arrogance that you keep saying it's a "new science"? There is no reason to say that. It's really just a narrow and unproven direction of computer science and information theory. I don't think Einstein had to make claims about how his theory would change the world BEFORE it was fleshed out: "trust me, I'm Einstein." Or, "it's a new science, give it time." What scientist gets away with that kind of public pronouncement? Scientists keep their mouths shut until they have a concrete contribution.
By the way, there are lots of promising new directions unrelated to NKS in this area, many aimed at various aspects of complexity. This is just one of the pack and should be labeled "a new theory" just like anything else in the area. Also, many theories of complexity actually provide a mechanism for improving the search, which seems much more realistic and practical, so maybe those should have taken the moniker "a new kind of science." Then again, I do appreciate that NKS is theoretically interesting whether or not it has any practical merit. So I'd at least admit that it deserves to be a "new area of research."
Though I think a lot of critics out there really wonder: Is NKS more than alchemy? Is looking at a "complex" distribution in order to find the secret of the universe any different from trying to find God in the digits of Pi? I do not wish to sink to such a level of ad hominem attack as to suggest it's alchemy- it remains to be seen. But I know others are thinking that. And I do wonder why you feel confident that it is not?
When there is a particular known effect (typically simple) you are modeling, often you use a simple program with simple behavior, not a complicated one. You pick the rule for its ability to mimic the key driver in the modeled system (local contagion in a forest fire model, local inhibition in a snowflake model, whatever).
It generally has parameters (for CAs, initial conditions, e.g.), and the formal-to-model translation typically adds another one or two (a scale, a coefficient, whatever). Then you can fit by selecting on the rule's parameters without varying the rule.
Once you have such a model, you can refine it if you like. The base rule is known from the previous version. It has intrinsically obvious extensions that keep the old effect you chose it for. You add another color or allow a different range of weights, whatever.
When instead the phenomenon you are interested in shows real complexity, the first question is its basic cause. It is useful to know complexity onset breakpoints in that case. You can pick the overall system type (class of rules, not a rule) to match known microfeatures of the real system (locality, branching, etc). Then you start with the simplest systems in that class that give complex behavior, and compare it to the typical behavior of the real system. If statistical signatures and other such measures do not match well, then add features, increase system resources etc to improve the match.
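A toy sketch of that fitting loop (the "measured" target here is generated from rule 110 itself, just to show the mechanics; real signatures would come from data):

```python
# Signature matching: score candidate rules by how closely a simple statistic
# of their output (mean cell density per step) tracks a target series.

def density_series(rule, width=101, steps=100):
    row = [0] * width
    row[width // 2] = 1
    series = []
    for _ in range(steps):
        series.append(sum(row) / width)
        row = [(rule >> (row[(i - 1) % width] * 4 + row[i] * 2 + row[(i + 1) % width])) & 1
               for i in range(width)]
    return series

target = density_series(110)  # stand-in for measurements of a real system
best = min(range(256),
           key=lambda r: sum((a - b) ** 2 for a, b in zip(density_series(r), target)))
print("best-matching rule:", best)  # recovers 110 (its mirror, rule 124, ties)
```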
Relatively simple rules are much more likely to arise in real systems that contain the same formal interrelations among their elements. Overly complicated ones each become astronomically less likely.
The simplest possible diff equ gives exponential behavior. Does starting from something so simple and formal mean it won't ever happen in real systems? Hardly, it means the underlying relation - a rate dependent on a previous quantity - is common enough you will see it all over the place. Once you've seen and understood the purely formal behavior, you scan the world looking for places it occurs, and can now model them.
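Spelled out, that simplest case is the linear growth law,

$$\frac{dy}{dt} = k\,y \quad\Longrightarrow\quad y(t) = y_0\,e^{kt},$$

a rate proportional to the current quantity, solved by the exponential.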
We see nesting all over the place, and simple programs (particularly substitution systems) give a good explanation why. We see apparent randomness all over the place, even in systems with simple and relatively uniform components, and simple program style intrinsic randomness generation gives a good explanation why.
Just as we study conics, trigs, polynomials, diff equs, or power laws in their own right, we study CAs, substitution systems, networks, etc. in their own right. Just as there are theorems about fitting things with continuous functions, but nobody thinks that means everything is a polynomial, there are universality theorems, but nobody thinks that means everything is a Turing machine. Where you see a behavior you've seen before in a sine plot, you don't use a polynomial, you use a sine. Where you see a behavior you've seen before in a CA, you don't use either, or a TM; you use a CA.
The real world operates according to simple program logic. Simple program modeling is here to stay, and computer experiment is and will remain its principal method. Modeling remains what it always was, an art informed by a priori formal knowledge and noticing patterns and equivalences. The rest is commentary.
Jason, fair enough, I hear what you're saying. I guess I'd compare it to probability distributions that happen to match natural phenomena, like the Gaussian/normal curve. Someone looks at one and one day a lightbulb goes off in their head and says, "aha I know what this is!" I guess you're hoping stuff like that will happen with these kinds of rule-based systems, and it might. I'm not against exploring a new research area.
Out of curiosity, what do you find "so tiresome"? Is getting critical analysis tiresome for the NKS crew? I've tried to be careful not to make this into a subjective/personal issue, as I have been genuinely interested in the answers to the questions I have. You've all done a good job, honestly, providing answers, and I have truly broadened my perspective. But I am a little surprised you find this type of critical inquiry tiresome. Doesn't science progress through a conversation of unbridled challenges and questions? It sounds like you just want to get back to preaching to the choir. However, most science is conducted through peer evaluation, so "new science" usually cannot sidestep that process as Wolfram has done. If he'd gone through a peer-review process, it would have been a lot worse than the fairly mild conversation we've had here. You should see some of the reviews I get (and to which I have to respond!)
In any case, I appreciate the lucid explanations you've provided so far, and I really feel I know where NKS is coming from now.
So tiresome means "offers basically nothing...not really scientific...smacks of arrogance...a narrow and unproven direction...alchemy".
(Oh and of course my own method is wonderfully promising instead etc).
Which is remarkably science-free drivel.
Ok that's fair, I should have separated two issues I'm really interested in.
The first, which you've largely addressed quite well, is the question of where the meat of the theory is and how its practitioners see it being applied. And I appreciate that you've gone out of your way to explain it.
The second issue, however, is how NKS proponents and even employees view the nontraditional route through which this theory has been promoted, and _why_ that route was chosen. I'm honestly just curious about it; I didn't mean to cast aspersions so much as say, "look, here's what the critics are saying, what's your response?"
You didn't really respond at all to those questions, which perhaps is understandable since I mixed them up with my first line of questioning, but I'm still curious about the latter topic. In your collage of quotations from me, you conveniently left out context; for example, when I said "offers basically nothing," the whole phrase was "offers basically nothing in the way of search." I was acknowledging that it has other intriguing aspects. "Smacks of arrogance" is just something I've heard critics say over and over again, and I was interested in your response as to why you feel your approach to promoting "new science" is not unnecessarily arrogant. "Narrow and unproven direction" seems largely true; I'm not sure why that's a criticism. You've said we "should instead be ready to be surprised," which is in effect the same as saying it's unproven. "Narrow" may have been an unfair knock; I apologize for that. I'm not sure if it's narrow. Yet on your part it was unfair to use "alchemy" as an example of my "drivel" when I specifically said, "I do not wish to sink to such a level of ad hominem attack as to suggest it's alchemy." I was just pointing out that many have said that and giving you an opportunity to respond.
I'm genuinely curious about this stuff. You have to understand that, as a sociological phenomenon, watching you guys and the reactions of the scientific world to you is fascinating from the outside. I keep hearing these things from your critics, and sometimes I am sympathetic, but sometimes I wish I could hear more of the other side of the story. So that's why I'm putting it in front of you: not to be tiresome but to actually hear that other side that I am so intrigued about. Perhaps to you it feels like you are under a tiresome onslaught of drivel, but from the outside it doesn't really look that way. It looks more like you don't want to engage the scientific community in a normal way, and the scientific community is a bit peeved about that, perhaps arrogantly so. (Perhaps Wolfram is a kind of Galileo of his time, or perhaps he is just the usual crank who thinks he's changing the world; the fact that it is unclear is a wonderful and deep story.) So it's really not drivel to bring up these questions, although I agree it is not itself "scientific." It is more sociological, but the sociology of science is fascinating in itself when a rebel tries to buck the establishment.
For example, why does Wolfram feel it's necessary to sidestep the scientific review process? How does it feel to be in some sense demonized for doing so? Do you feel like rebels fighting the establishment, or are you perhaps just unaware of how the scientific review process usually works? Is there within your group a view that scientists in general are too dogmatic and entrenched to accept or intelligently criticize something novel? Why not engage scientists at already-existing conferences in related fields instead of trying to start your own parallel conference on this one single research direction? Are you afraid that you will not be accepted at those conferences? Why would you be afraid of that? Is it because scientists are, in your view, ignorant like the church in Galileo's time? Do you see the irony of a scientist declaring his own work to be a new kind of science when that is usually the job of the prism of history and not the scientist himself? Why do you think in this case it's OK to do that anyway?
If you think I'm attacking you, you are completely misunderstanding. These are fascinating questions. I'm all for going out in unpopular directions; I'm all for spending millions of dollars of your own money to fund work that no one else is willing to do that can potentially change the world. That is the spirit of innovation. Going your own way is cool stuff, sometimes it's the way the world changes, sometimes it's no more than a shot of passion for people looking for a cause. But even if it's only that, that's still a contribution to people's lives and aspirations.
So I have no problem with what you're doing. I just want to know your _side_, your perspective, how you view the critics. I hope you are not simply dismissive of the establishment so quickly that your only comment is to say it's drivel. It's deeper than that, let's face it. But that doesn't mean they're right. It just means it's worth addressing.
It might not be so easy to explain the answers to your questions in one forum post, or even one forum thread.
There are actually lots of opinions, and you can find many of the answers to these questions in various parts of this forum. Many are answered in the NKS book itself.
At the upcoming Wolfram science conference in DC, I along with a few others are giving a minicourse, where we will give a concise one day course on NKS. I've got some materials where I show some examples of how one applies NKS to practical and to scientific questions.
There are going to be many interesting talks at this conference, reflecting many of the pure abstract as well as the applied research going on.
Questions with answers
I think you are asking some great questions here.
... why does Wolfram feel it's necessary to sidestep the scientific review process?
Wolfram discusses this question on the first page of text of the book, in the Preface on page ix.
Why not engage scientists at already-existing conferences in related fields instead of trying to start your own parallel conference on this one single research direction?
The name of the book states that this is a new science. Certainly as far as I know, there are no other existing fields or conferences dedicated to exploring and finding useful things in the computational universe. It makes sense to dedicate a conference to NKS, because that need isn't being addressed anywhere else.
"Why not engage scientists at already-existing conferences in related fields ..." assumes that isn't actually happening. There are a growing number of papers related to or directly building upon NKS. It seems likely that many of those authors are presenting at conferences, and judging from the range of topics those papers address, in many different existing fields as well.
Other questions you are asking seem almost rhetorical to me:
How does it feel to be in some sense demonized for doing so? Do you feel like rebels fighting the establishment, or are you perhaps just unaware how the scientific review process usually works? Is there within your group a view that scientists in general are too dogmatic and entrenched to accept or intelligently criticize something novel?
It is really difficult to answer questions like this in a concrete way. For example:
"How does it feel to be in some sense demonized for doing so?" This is a personal question, and the answer depends on who you're talking to (and whether or not they even agree that the term "demonized" is appropriate).
Paul, thanks for the insights. Thanks to everyone else too for trying to answer my questions. I learned a lot here about this theory and what it's really about.