A PDE that actually resembles the complexity of Rule 30.
Short version: see here for a PDE graph that looks surprisingly like a CA such as Rule 30.
Wolfram claims at the end of Chapter 4 of NKS that although the complexity characteristic of CAs is harder to come by in continuous functions, it can still be found. However, the best examples he finds (see page 166) really don't qualitatively resemble the patterns found in, for example, Rule 30. The PDEs he examines exhibit relatively simple patterns, mostly made up of waves that seem to recombine in fairly predictable ways. Compared to something like Rule 30, it seems like a stretch to claim that the "basic phenomenon" is the same. And although one of the primary missions of NKS is, as it says, a "new" science, it still seems like a significant outstanding question to what degree any of these new CA complexities appear in traditional continuous functions.
In any case, it was with this dissatisfaction in mind that I went looking for a more CA-like PDE. Eventually I hit on one very simple PDE that produced this graph, which captures the qualitative look of a random CA like Rule 30 far better than Wolfram's PDEs do. The real question, of course, is why this one does it when so many don't. Another question is whether it really does it, or whether the pattern is just a computational artifact. I explore the equation, and a few cousins, at www.democraticwriting.com/diffcell, but I don't quite have the math to start figuring out why it behaves as it does, and whether that behavior is fundamentally similar to a CA like Rule 30. I'd appreciate any thoughts or suggestions--or let me know if you've seen this somewhere else before!
Those are some interesting critters. Did you construct them, find them randomly, or turn them up through some systematic search?
One correction. On your web page you say of the pictures found at http://www.wolframscience.com/nksonline/page-925 that they are a "few other pde's that Wolfram said somewhat resemble the most interesting CA behavior".
I think Wolfram is just saying these are a few more examples of the most popular PDEs, not that they look like any of the common cellular automaton examples.
His general goal, particularly with the last example on http://www.wolframscience.com/nksonline/page-166, is to show that complex behavior happens in PDEs, not that they look like some cellular automaton.
Your critters appear to be more like Rule 30 or Rule 110 because they are non-symmetric and have a larger number of discrete structures within the area shown.
One thing to realize is that every system has its own character, so it's sometimes difficult to compare the complexity found in one system versus another except in very qualitative ways. The only real way I think to judge the overall complexity is whether you can look at the overall picture and understand what is going on - whether you can predict its future evolution without much difficulty.
Wolfram's examples of complex PDEs have the quality that you cannot easily predict this. His PDEs are aesthetically pleasing and at times show smooth, flowing behavior, but I don't think that lots of sharp transitions are needed for complex behavior.
Different "characters," or different degrees of complexity?
Thanks for taking a look at my "critters." I did indeed have to find them systematically--part of what made the task intriguing was that most randomly chosen PDEs seem either to exhibit predictable behavior or to swiftly develop singularities.
I modeled my PDEs on the most interesting of the basic CA rules: if we take white-black-black to be roughly like the 0->Pi/2 range of Sin(x)--the left shoulder of a hump--and so on for the other asymmetrical CA rules, then I wanted a PDE that, like CA rules 30 or 110, pulled the different parts of a curve in the x-y plane in different directions as t progresses. A PDE where the second time derivative equals the product of the first and second derivatives (in the x-y plane) pulls the left shoulder down (positive slope * negative curvature), the right shoulder up, and so on--in effect chopping up each hump as soon as it's made. Once the tendency to develop singularities was countered by the addition of -y^3, my PDE produced something like the CA rule 30 behavior I sought.
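Schematically, in Mathematica terms, the construction looks something like this (a rough sketch only--the precise equation and setup are on the page linked above; a Gaussian bump and periodic boundaries just stand in here):

    (* second time derivative = (slope)*(curvature), chopping up each
       hump as it forms, with -u^3 countering the tendency to blow up *)
    sol = First[NDSolve[{
        D[u[t, x], {t, 2}] ==
          D[u[t, x], x] D[u[t, x], {x, 2}] - u[t, x]^3,
        u[0, x] == Exp[-x^2],
        Derivative[1, 0][u][0, x] == 0,
        u[t, -20] == u[t, 20]},
       u, {t, 0, 15}, {x, -20, 20}]];
    DensityPlot[u[t, x] /. sol, {x, -20, 20}, {t, 0, 15}, Mesh -> None]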
Why do I think this equation matters particularly? I agree with you that "it's difficult to compare the complexity found in one system versus the other," but Wolfram goes to some lengths in NKS to characterize the important kinds of complexity, and by most of these accounts his PDEs fall relatively low on the scale. Wolfram ranks the complexity of his CAs by their behavior: simple patterns, fractal patterns, apparently random patterns, and "edge of chaos" patterns as in Rule 110. Given that Wolfram draws the threshold for complexity between stages 2 and 3, it seems doubtful to me that the PDEs on pages 925 and 166 surpass the regular or fractal pattern stage, particularly if the bottom equation on page 166 owes its most interesting characteristics to computer approximation effects.
The other important requirement for complexity seems to be simple origins. Without that, the system is just complicated. But the PDEs on pages 925 and 166 seem to exhibit more complexity only as the equations themselves become more complex--particularly if you count the sine-wave background for Wolfram's equations, which seems to generate a lot of the cascading effects and combinatorial interference of the various dotted lines. Given that all the PDEs in NKS are symmetrical, low on discrete structures, show little in the way of surprising pattern, and may owe a lot of their complexity to the complicatedness of their generating equations, it seems like they don't really exhibit the shocking exuberance from simple beginnings that draws us all to the discrete CA-type systems.
What I wanted was a PDE that seemed genuinely as simple as the CA rules, but produced behavior that was not arguably, but clearly, as complex as the best CAs. Of course, to get anywhere in this argument we need more quantitative measures of complexity that can compare different systems. Is complexity a threshold, or merely one region of a spectrum? In reading NKS, the PDE section seemed to offer the least payoff for the most complicated input. Why does complexity seem to spring easily from discrete CA-type rules, but require so much more work in the PDE analog? Is this something about continuous functions in general, or just differential equations? Why does complexity blossom when it does from PDEs--and does it necessarily have a "different character" from that found in discrete CAs? Why do the PDEs in NKS exhibit similar characters that differ from my PDEs--which in turn seem more like the character of a CA like Rule 30? Do CAs help us understand the hodgepodge mathematics of differential equations, or continuous functions more generally? Any thoughts?
I worry a bit about how much of it is rounding-related. But the pictures on your page around the line "The following are based on related equations, targeting specific regions of a curve" look pretty convincing - a lot like some kinds of turbulence.
The reason the rounding issue enters is that continuous functions are based on idealizations about real numbers, which start getting funky whenever there is any sort of sensitive dependence - on initial conditions, on the phase of an interaction between structures, or anything similar.
When we use a real number for something we are mathematically specifying more than we generally intend to specify. We want to say something about a relative position, not about the details of an infinite digit sequence treated as an information structure. When the interactions we are tracking are themselves additive and linear, this "overspecifying" aspect of mathematically defined real numbers doesn't crop up. Things that were close stay close, or migrate smoothly, etc.
When instead we are dealing with a process where details of digit sequences do matter, we can see what is going on in a fully discrete setup. But when we see something in a continuous setup, we aren't sure whether it comes from the equation or from the algorithm that makes it discrete in order to compute things about it.
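One crude way to probe that (a sketch only, using a stand-in equation of the same general shape as yours; the grid options are the standard MethodOfLines settings): pin the spatial grid to two different resolutions and difference the runs. If the picture survives the change, it probably belongs to the equation rather than the algorithm.

    (* solve on an n-point grid; compare a coarse and a fine run *)
    runAt[n_] := u /. First[NDSolve[{
        D[u[t, x], {t, 2}] ==
          D[u[t, x], x] D[u[t, x], {x, 2}] - u[t, x]^3,
        u[0, x] == Exp[-x^2],
        Derivative[1, 0][u][0, x] == 0,
        u[t, -20] == u[t, 20]},
       u, {t, 0, 15}, {x, -20, 20},
       Method -> {"MethodOfLines", "SpatialDiscretization" ->
          {"TensorProductGrid", "MinPoints" -> n, "MaxPoints" -> n}}]];
    {uCoarse, uFine} = runAt /@ {200, 400};
    DensityPlot[uFine[t, x] - uCoarse[t, x],
      {x, -20, 20}, {t, 0, 15}, Mesh -> None]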
I find the turbulent pics more convincing than the overly spiky ones, because I think it is more likely the phenomenon is really there, arising from overlapping cells of phase interference or something similar, even with smoothly varying values.
You can track the way precision effects propagate in a standard CA by imagining that cell values are known imprecisely, and then looking at how the resulting uncertainty spreads through the subsequent evolution. What you find in that case is a period of ordinary-looking evolution, then a region in the middle of the pattern that goes completely grey - all cells uncertain - with an inverted V- or U-shaped boundary between them. One can then do a "certainty transform" on the resulting evolution, mapping any known cell value (whatever it is) to one color and unknown to another. It looks rather like the inverse of some of your earlier pics - the known region tends to fit where your pics are flat, and the unknown region where your pics get spiky. This happens because a large number of cells, each with a small uncertainty, multiply out to a large cumulative impact on the cells in the grey V-U region.
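Here is a minimal sketch of that three-valued tracking for Rule 30 (just an illustration of the idea, not code from anywhere in particular; for simplicity a single unknown cell, marked "u", stands in for the imprecision):

    rule30[a_, b_, c_] := Mod[a + b + c + b*c, 2];
    (* a cell stays determined only if every completion of its
       unknown neighbors yields the same new value *)
    stepUncertain[row_] := Table[
      Module[{nbrs, outs},
        nbrs = row[[Mod[i + {-1, 0, 1} - 1, Length[row]] + 1]];
        outs = Union[rule30 @@@
           Tuples[nbrs /. {"u" -> {0, 1}, b_Integer :> {b}}]];
        If[Length[outs] == 1, First[outs], "u"]],
      {i, Length[row]}];
    init = ReplacePart[ConstantArray[0, 101], {51 -> 1, 60 -> "u"}];
    evo = NestList[stepUncertain, init, 50];
    (* the "certainty transform": unknown -> black, known -> white *)
    ArrayPlot[evo /. {"u" -> 1, _Integer -> 0}]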
One way you might try to distinguish between such uncertainty and the actual DE behavior is by some sort of perturbation analysis. If the value at x1,y1 is not what NDSolve says it is, but is constrained to be .01 higher or lower instead (as a boundary condition), does that change the way anything looks later on? By how much, in what way, etc? Just a suggestion for something to explore.
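Concretely, one version of that test (again a sketch with a stand-in equation; I've perturbed the initial data rather than constraining an interior value, since NDSolve handles that more naturally):

    (* run the same equation twice, the second time with a small bump
       added to the initial data, and plot where the runs diverge *)
    run[eps_] := u /. First[NDSolve[{
        D[u[t, x], {t, 2}] ==
          D[u[t, x], x] D[u[t, x], {x, 2}] - u[t, x]^3,
        u[0, x] == Exp[-x^2] + eps Exp[-(x - 3)^2],
        Derivative[1, 0][u][0, x] == 0,
        u[t, -20] == u[t, 20]},
       u, {t, 0, 15}, {x, -20, 20}]];
    {u0, u1} = run /@ {0, 0.01};
    (* a localized difference suggests robust behavior; a difference
       that swamps everything suggests sensitivity or rounding *)
    DensityPlot[u1[t, x] - u0[t, x], {x, -20, 20}, {t, 0, 15},
      Mesh -> None]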
All that said, I do like some of the fine-grained later pics, which look to me quite like turbulent flow.
The tricky part in getting a PDE to perform computations is getting one that goes on forever. For instance, one which emulates Rule 30, or any other ECA, should be computationally stable. By that I mean the interpretation of the emulation should stay the same under any slight perturbation of the initial conditions, when run for any amount of time.
Otherwise, it would be difficult to establish that the emulation holds for all time, because a slight error in the output after one step could grow into a large perturbation, enough to eventually destroy the emulation.
There are also some issues unique to the PDE setup, such as blow-up, a well-known drawback of PDE modeling. For instance, the number of rabbits might go to infinity in finite time--or, perhaps more disturbing, the number of rabbits might fall between 0 and 1.
But it is the long-term computational stability which I think is the hardest part. I have to admit that I do not really understand Tokihiro's paper. There seems to be a limit taken, so it is not clear whether he only emulates an ECA for an arbitrary (but finite) amount of time, with a different equation for each amount of time.
It is interesting that the equations he uses admit soliton solutions--waves that pass through each other--a somewhat trivial behavior in a CA with memory. Could they also admit (non-soliton) solutions which interact to give other CAs? Could the phases of the soliton waves emulate r=1/2 rules?