A New Kind of Science: The NKS Forum > NKS Way of Thinking > You cannot classify the complexity of a system which you cannot predict.
inhaesio zha


Registered: Oct 2005
Posts: 403

You cannot classify the complexity of a system which you cannot predict.

You cannot classify the complexity of a system which you cannot predict. Take CAs: there's no way to know how complex a system is if you can't always say what it's going to do next. All you can do with such systems is know how complex *you* think they are, which is a whole different kettle of fish. It's kind of like intelligence testing: all you can really determine by testing people is which ones produce the same test results, or which ones agree (on the correct answers) with the test-maker. For job performance, or school performance, or life performance, if you *could* predict it, the key quantity would be *what is the person going to do next* [in this job, in this school, in their life]? But testing can't tell you this.

We know from complex systems (chaos, NKS) that it doesn't take much design complexity to produce system behavior that is too complex for even the most complex system to predict by shortcut. To be able to reliably predict the behavior of a system is, in a way (and among other things), to understand the "level of complexity" of the system. But, even if you can see regularities or patterns in a system, if you can't [reliably, simply, quickly] predict it, then you don't understand it well enough even to classify its complexity. You might recognize patterns in its behavior, but there might be things going on in the system that you don't recognize, that are hidden from view right in front of your face.

When I look at what in NKS is classified as a class III system, I might dismiss it and say that there is nothing "intelligent" going on, or that nothing intelligent could go on in a class III system. But this would be a big mistake. I can look at class I and class II NKS systems, and because they are so simple and repetitive that I can form a shortcut-style predictive model in my head, it is reasonable for me to claim to be able to classify their complexity: no aspect of their behavior defies (or could elude) my understanding, my mental model, of the system. But in a "class III" system, just as much as in a "class IV" system, the very fact that I cannot predict what's going to happen next (without running the system) means that I *cannot even classify the complexity* of the system. There might be visual similarities in our representations of all class III systems, and again in all class IV systems, in NKS, but how can I possibly dismiss the possibility that there is cooperative systematic order, or long-range communication (or intelligence), in a class III system? Our dividing line between class III and class IV is based on *what we can see*. But the fact that even class III systems are complex enough to evade prediction means that they are complex enough that we have to admit there are things going on there that we cannot see. Which means, frankly, that even behavior that looks "random" or randomized, or simply textural, could contain [intelligent] computation that we just don't recognize. That is to say, those systems could be doing computations that are meaningful from a certain point of view, one at least as deep and complex as ours, but one that we just don't happen to share. Think of it this way: in CA systems whose cells have three "rows" of awareness/memory, like the {water} systems I've looked at some, the output of most systems looks like television snow...there could be something going on, but it's not easy to spot. That TV snow is random-looking in a way similar to class III systems. But even though class III systems *look* random, it may be that in the interaction of those many "triangles" or other repeating forms, there is systematic long-range communication taking place between various parts of the system *that we just can't see*. Just as with systems that look like TV snow, if our way of observing the system were different, it might not look random at all.
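Here is a minimal sketch, in Python, of the kind of pictures at issue (the rule numbers, width, and step count are arbitrary illustrative choices):

[code]
# Evolve an elementary cellular automaton (ECA) with wrapped boundaries.
# Rule 30 is the canonical class III ("TV snow") example; rule 110 is
# class IV, with localized structures on a periodic background.

def eca_step(cells, rule):
    n = len(cells)
    return [(rule >> (cells[i - 1] * 4 + cells[i] * 2 + cells[(i + 1) % n])) & 1
            for i in range(n)]

def run_eca(rule, width=79, steps=40):
    cells = [0] * width
    cells[width // 2] = 1                      # single black cell to start
    for _ in range(steps):
        print("".join("#" if c else "." for c in cells))
        cells = eca_step(cells, rule)

run_eca(30)    # looks random: class III
run_eca(110)   # particles and interactions: class IV
[/code]

Note that the class assignment rests on what these printouts *look like* to us, not on any proof about what the triangles are or aren't communicating.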

In intelligence, performance, and job testing for humans, this signals the need for a shift for some analysts: there's a very concrete sense in which a subject doing things that the tester cannot predict means (and only means) that the tester doesn't completely understand the subject. Somewhat akin to how it's sometimes possible to show that two parts of our chaotic world are connected, but impossible to ever say that any two parts are not connected, in testing it's sometimes possible to say that you understand a subject, but that's the minority case. Whenever a test taker does something the test maker cannot predict or explain (and that includes "wrong" answers), the only safe conclusion the test maker can assert is the quite humble one that he does not understand the subject. I think it's clear in these situations on which side of the court the limitation lies.

Last edited by inhaesio zha on 12-31-2006 at 07:19 AM

Posted 12-31-2006 07:05 AM
inhaesio zha


Registered: Oct 2005
Posts: 403

meaning is resonance

What is the meaning of meaning? What does it mean to mean? I think meaning is resonance. Just as the air in the cavity of a drum is resonant with respect to the vibration of the drum head, and just as a story that bears similarity to the nature of my life is meaningful to me, a system is meaningful to me if it resonates with me. And resonance has everything to do with the [shape of the] beholder.

Let's say I'm looking at the output of a good pseudo-random sequence generator. If I look only at the simple statistical frequencies of the digits produced, the sequence will appear random, meaningless. But if my way of looking at the thing were different--let's say I had in my mind the function(s) used to generate the sequence--then the sequence would resonate with me; it would seem meaningful.
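As a concrete sketch of that distinction (a linear congruential generator is used here only because it is the simplest classic PRNG; the constants are the well-known Numerical Recipes parameters, and everything else is an illustrative choice):

[code]
from collections import Counter

# A linear congruential generator emitting one decimal digit per step.
# High-order bits are used because they are better mixed than low ones.
def lcg_digits(seed, n, a=1664525, c=1013904223, m=2**32):
    x = seed
    for _ in range(n):
        x = (a * x + c) % m
        yield (x >> 16) % 10

digits = list(lcg_digits(seed=42, n=100_000))
print(Counter(digits))   # frequencies near 10,000 each: looks "random"

# Yet to anyone holding (a, c, m) and the state x "in mind", the next
# digit is a pure function of the current state -- zero surprise.
[/code]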

NASCAR fans like racing movies more than I do. Can the seeming randomness of class III systems be analogous to this in some cases (albeit as an extreme example)? Maybe there are systems which, because of our methods of perception (visual and otherwise), seem random or meaningless to us, but which, if we perceived them in a different way, would resonate.

Couldn't I hide a message in a sequence of (at first look) seemingly random digits, steganography-style, by using every hundredth digit to store the message, filling in the other digits so as to make the frequencies of the digits come out [pretty much] evenly? If I could do that, then couldn't class III systems, in a similar way, be doing something decidedly non-random while appearing otherwise?
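Yes--here is a small sketch of exactly that scheme (spacing shrunk from every hundredth to every tenth digit so the demo stays small; the names are just illustrative):

[code]
import random

# Plant message digits at every `spacing`-th position; fill the rest
# with uniform random digits so digit frequencies stay near-flat.
def hide(message_digits, spacing=10, seed=1):
    rng = random.Random(seed)
    stream = []
    for d in message_digits:
        stream.extend(rng.randrange(10) for _ in range(spacing - 1))
        stream.append(d)
    return stream

def reveal(stream, spacing=10):
    return [stream[i] for i in range(spacing - 1, len(stream), spacing)]

secret = [7, 3, 1, 4, 1, 5]
stream = hide(secret)
assert reveal(stream) == secret
# A frequency test on `stream` sees roughly uniform digits; only a
# reader who knows the offset recovers the message.
[/code]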

Last edited by inhaesio zha on 01-02-2007 at 06:37 PM

Posted 12-31-2006 08:25 PM
Iconasostacles


Registered: Jul 2007
Posts: 6

Fairly obviously, the nature of the experiencer of phenomena bears stipulation when making observations. I am reminded of the "integral math" of Ken Wilber, which attempts to algebraically identify realms of possible investigation by coding the types of experiencers, experiences, and styles of viewing. So we may wish to put "as far as X can tell" around any given class of observations.

So are we left with just "infinitely complex as far as we can tell"? Definitely. Yet we might consider it correct until we compute our current limits. But could we out-compute what seems transcendentally complex to a human consciousness? Not directly. However, we may be able to take the word of a system that is more complex than us and provides testable results in domains that we can comprehend. I am reminded of how hexadecimal is sometimes used to mediate between human base-10 math and digital binary math. Systems too complex for us may provide reasonably testable results that allow us to "take their word" about the infinite or non-infinite nature of patterns that we cannot handle. This is not an absolute measure, but it might be an improvement.

The possibility of "go-between" patterns is the promise of useful commonality -- which ties into the last post about 'meaning as resonance.'

We cannot know the source of all patterns such that the ultimate nature of any pattern can be confirmed or denied. However, we can increase our capacity for resonance with more complex patterns through the use of other patterns that meet or exceed our level. Some of these might tell us that, as far as they are concerned, many patterns reach infinite complexity. They could be wrong, but they will be more correct than we are.

Posted 07-30-2007 08:11 PM
inhaesio zha


Registered: Oct 2005
Posts: 403

ok, let's try this

If you cannot classify the complexity of a system that you cannot predict, then isn't it the case that you cannot classify your own complexity? Based on Stephen Hawking-type ideas about predictive models needing to contain as many data elements as the thing they're predicting in order to be accurate, you can only contain as many [predictive] elements as you contain [actual self] elements...so you cannot shortcut-predict your own future actions. And if you cannot know what you yourself will do next, then isn't it the case that you cannot classify your own complexity--i.e., that you cannot know how smart you are, that you cannot know where you stand with respect to other intelligent beings? Maybe I'm stretching this too far, but: might it be the case that it is fundamentally impossible for an individual to know how smart they are with respect to others...such that, essentially, when I converse with you, there is no general, absolute guarantee that I can know which one of us knows more, or knows better, about some subject or fact? I may simply be predisposed (for whatever personal, historical reason) to think that discussions can continue forever without conclusion--that while discussion is possible, conclusion is not always possible--but that is how it seems to me. Perhaps I cannot predict my own actions, and therefore I cannot understand my own complexity, and furthermore I cannot compare my complexity with the complexity of others...maybe, in this sense, in a conversation, it is fundamentally impossible to be sure who is right? Or who "knows more"? Maybe it's impossible to tell.

Posted 08-17-2007 05:29 AM
Jason Cawley
Wolfram Science Group
Phoenix, AZ USA

Registered: Aug 2003
Posts: 712

I usually avoid commenting on discussions like this, which seem to me to miss basic points at the outset of the whole subject. Frankly, it is usually plowing the sea. But I'll give it a try on the last post and see if any of it sticks.

"If you cannot classify the complexity of a system that you cannot predict"

But I can.

I can classify anything I please, cutting it up in as many ways as I like, with sets of distinctions that are closer to or farther from the subject's "natural" elements or substructures. As for prediction, I can predict many things in gross and fewer in detail. We can say that many things are not exhaustively predictable. And the relative importance of the predictable and the unpredictable bits varies from system to system and from purpose to purpose.

What is classification? Statically, it is just distinction or measurement. I can take a random walk and measure where it is at each step. I can classify the levels reached. I can classify separate runs of the same system. Etc. I can also predict some gross features--like the way variance over a sample grows with time, say--without that making the details of the next run any more predictable. Classification does not depend in any essential way on prediction in detail.
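A sketch of the random-walk example (the sample sizes here are arbitrary):

[code]
import random

# One run of a +/-1 random walk: unpredictable in detail.
def walk(steps, rng):
    pos, path = 0, []
    for _ in range(steps):
        pos += rng.choice((-1, 1))
        path.append(pos)
    return path

rng = random.Random(0)
runs = [walk(400, rng) for _ in range(2000)]

# A gross feature is still classifiable and predictable: the variance
# of position across the sample grows linearly with time (~ t).
for t in (100, 200, 400):
    xs = [run[t - 1] for run in runs]
    print(t, round(sum(x * x for x in xs) / len(xs), 1))
[/code]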

The most one could say is that one might imagine one possible approach to classification for systems that can be predicted in detail, which starts with a full, correct, detailed description of the future trajectory, and then uses that to answer sets of simpler questions about the system (at future points or not, averages or compressions or not, etc). But nothing forces this procedure upon me. It is purely voluntary.

It recommends itself if the full detailed description is quite easy and the practical questions I'd like to ask about the results are numerous or widely varied (so I save time and effort if I keep the intermediate result--predicted system behavior--instead of redoing it for every question asked about the behavior). This is one formal analysis option in a simple special case, and there is nothing exhaustive about it.

"isn't it the case that you cannot classify your own complexity"

No, I can. See above on what classification is. I can classify all systems into those less complex than, equally complex as, or more complex than a times table up to the number 10 - if I want. I am more complicated than that object. I just classified my own complexity. "But your classification admits further possible refinement." Yeah. All do. So what?

A better objection would be, your classification lumps together so many things in the "more complicated" bin that it hardly seems useful for practical purposes. But that depends on those purposes. If I am considering curricula for preschoolers, or the typical operations performed on a certain sort of abacus, it might be a relevant classification.

If instead I classify levels of complexity into "computationally reducible, or computationally irreducible", and then again into "computationally universal, or not", then there are other obvious issues for which this is a useful classification. For example, NKS. It also raises questions - do these two distinctions typically coincide, or not? Where are the edges between them, and what specific formal instances or behaviors occur where they do not match? How common are they in the space of formal systems generally, or those with this many elements or internal relations? Etc. More on their content below.

"ideas about predictive models needing to contain as many data elements as the thing they're predicting in order to be accurate"

Sorry, an unsound idea, or one misapplied. Predictive models do not need to contain as many data elements as the thing they are predicting in order to be accurate. There are all kinds of simplifying substitutions and shortcuts in formal and real behaviors. Even for every single detail. I have a really accurate description of the future value of every single cell of rule 0 after the initial condition for every initial condition regardless of size or number of steps, using just one element. When the behavior is simple it can be fully predicted without "one to one and onto" modeling.

Second, limited aspects of the behavior of a system can be exactly as simple as the previous, and therefore just as predictable, even if other aspects of the behavior are not. I can tell you the exact number of cells in a cellular automaton evolution with wrapped boundary of width 100 over n steps, without using that many elements, or even caring what rule it is.

And this is not an inaccurate prediction. It is not like estimating the probable number of black cells in a rule 30 pattern of the same size from some given initial condition. That I could still tell you within limits and with an expectation, without doing the calculation one to one and onto--another form of prediction--but the exact number would only be within my error bar (at least for a confidence-interval portion of the initial conditions). I can also get the exact number, but to do so I need to perform a calculation using as many elements and steps as the system.
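To make the three cases in the last few paragraphs concrete, a sketch (the width, step count, and helper names are illustrative; the ECA updater from the earlier sketch in this thread is repeated so this runs on its own):

[code]
def eca_step(cells, rule):
    n = len(cells)
    return [(rule >> (cells[i - 1] * 4 + cells[i] * 2 + cells[(i + 1) % n])) & 1
            for i in range(n)]

# Rule 0: exact detailed prediction with one "element" -- every cell
# after step 0 is white, for any initial condition of any size.
def rule0_cell(step, position):
    return 0

# Exact, rule-independent prediction of total cell count on a wrapped
# board: width * steps, no simulation needed.
def total_cells(width, steps):
    return width * steps

# Rule 30 black-cell count: ~50% is a cheap estimate with an error bar,
# but the *exact* count requires running every cell of every step.
def rule30_black_count(width=100, steps=100):
    cells = [0] * width
    cells[width // 2] = 1
    total = 0
    for _ in range(steps):
        total += sum(cells)
        cells = eca_step(cells, 30)
    return total
[/code]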

Notice, however, that this is not a feature of prediction of anything, as such. It is specific to - a complicated behavior, a detailed prediction, and an exact result. It requires all of the following - that the target aimed at be narrow rather than broad, and that the behavior getting there be computationally irreducible, and that "close doesn't count", only the exact result.

The first and last are true of modeling generally. NKS is explicitly about focusing on the middle one and getting a handle on it, formally. It emphatically does not always apply. That is why the distinction between computationally irreducible and computationally reducible behaviors is meaningful and important.

"you can only contain as many [predictive] elements as you can contain [actual self] elements"

Actually, I contain less, and so does every other real instantiation of any system capable of prediction of any kind. Far from this meaning that prediction never occurs or is phenomenally impossible, this relation demonstrates that prediction in general does not depend on one to one and onto equal time simulation. Prediction is phenomenal. Any theory that pretends it is impossible is falsified before it starts.

And from the previous, it is very easy to see how. There are vastly superior formal methods available. However, there are also specific hard questions about specific complicated systems, where the best one can do is to model aspects of the behavior one to one and onto. Aspects, though. One routinely relies on the other bits (simplicities on other levels, answerable questions, etc) for an empirical "remainder".

"you cannot shortcut-predict your own future actions"

It is no harder in principle than predicting any other system of comparable complexity. I might be in the process of doing something very simple, and so be readily able to predict what I am about to do.

You are instead slipping the problem. Even if we generalize to prediction of all future behaviors of a universal system, if the initial condition is given then that can often be done, because "universal" refers to behaviors possible over the ensemble of initial conditions, and some of the behaviors of a universal system are computationally reducible. The self part adds nothing; it is specifically irreducible behaviors that are not predictable in detail beforehand. (Even so, obviously anything instantiated also has lots of predictable aspects of its behavior, etc.)

As for trying to map complexity classifications into gradations of "intelligence", it seems fundamentally confused to me. It suffices to point out on the one hand that classes are thresholds (or "large discrete bins", not real numbers) and on the other that intelligence is not prediction of details.

Obviously it is a more flexible capacity, refers to ensembles of conditions, is a form of potential, includes an idea of adaptation and discrimination of relevant from irrelevant facts, has functional or value-scale components, etc. A cyberneticist would say "intelligent" means something like: able to change its behavior to get what is actually good for it. But without being "wedded" to that tradition, it is clearly disjoint from "can predict where every gluon will be at all future times" - a set that is empty.

Posted 08-17-2007 04:57 PM
James Jones Rounds
Penn State University; NanoHorizons, Inc.
State College, PA

Registered: Jun 2007
Posts: 1

Complexity impossible to classify, unless you can classify awareness...

While I agree that classification is always possible, there remains inhaesio's initial query as to how *accurate* such a classification is. And while, of course, one cannot always say exactly how accurate a classification's predictions will be, there are salient variables that point in the right direction. When testing very complex systems, such as human intelligence, one of the most important and superficially easy parameters to assess is how "aware" the subject is of the task at hand and its goal.

While the inability to predict a subject's actions on, for example, an intelligence test, only really signifies the tester's incomplete understanding of the subject, the subject's assessable awareness of the intention of the test gives a clue.

If a subject fails to act reliably on an intelligence test, prompting either an assessment that he/she is not intelligent or that the tester does not really understand the subject, the tester can ask the subject if he/she understands the point of the test. If he/she does not, then the apparent complexity of the subject's performance can be deconstructed in terms of his/her dyslexia, misunderstanding of the test, and so on.

Similarly, a serious investigation as to the "awareness" of different interacting forces in a cellular automaton rule can lead to improvements in the accuracy of classifications. This proposition for a CA mechanics could be very fruitful, but assessments of awareness can be tricky to use when their value is positive. A negative assessment--a situation in which there is a lack of awareness--is straightforward; in the opposite situation, how does one really successfully classify the aware system? For example, one can imagine the truly torturous process of attempting to classify the intelligence of the most beguiling criminals of all time and fiction, e.g. Hannibal Lecter. Infuriatingly high levels of awareness can easily come across as failure to perform the intelligence tasks at hand.

Similarly, Class III-style randomness can come across as complete self-unawareness within a system or CA. However, who is to say that such persistent, presumably eternal randomness is not in fact indicative of very high levels of internal awareness within the system, and thus representative more of a predilection for avoiding pattern? Why do we presume that "true" Class IV complexity always leads to pattern development? We like patterns, but do all complex systems like patterns?

Therefore I respond to inhaesio's question (how would we know if a pattern was so subtly embedded in randomness as not to impede that apparent randomness?) with a question: how do we know that the persistence of randomness is not itself a signal of high, deliberate complexity?

The assumption we always make concerns whether or not complexity tips "the" pattern-scale toward pattern formation or away from it. What informs our answer either way?

Posted 12-10-2007 03:28 PM
inhaesio zha


Registered: Oct 2005
Posts: 403

me: "...ideas about predictive models needing to contain as many data elements as the thing they're predicting in order to be accurate..."

Mr. Cawley: "...Sorry, an unsound idea, or one misapplied. Predictive models do not need to contain as many data elements as the thing they are predicting in order to be accurate. There are all kinds of simplifying substitutions and shortcuts in formal and real behaviors. Even for every single detail. I have a really accurate description of the future value of every single cell of rule 0 after the initial condition for every initial condition regardless of size or number of steps, using just one element. When the behavior is simple it can be fully predicted without 'one to one and onto' modeling..."

I'm going to respond to this, four years later, because I'm poking around this forum again and, frankly, I get a little annoyed when members of the inner NKS clique take a superior tone with me (JC says I "miss basic points at the outset of the whole subject"...he's "plowing the sea" by participating in this thread with me...he'll "give it a try...and see if any of it sticks"...sorry you seem to have such a low opinion of me, JC; I admire your philosophical and analytical perspectives on this site and others). PJL took a similar tone with me in person at the D.C. NKS conference and Cawley does it with me in this thread. You all should be aware, as people who clearly want to promote an NKS slant in the world, that when you approach outsiders like me with that kind of tone, it's a turn-off to your whole group. That said, I am clearly very interested in thinking about these ideas and participating with you within the context of this forum, so I'll move on to the content of my rebuttal to part of what JC writes above:

I may not have been as clear as I should have been, in 2006, in my reference to the idea I'm talking about, which I heard via---I don't know---some popularized Hawking book. The idea is this: to predict an irreducible system (of the type most often discussed in this domain), there being no shortcut-style, reductive description of the system (unlike in most of math and physics---math and physics *are*, essentially, reductive descriptions), you end up, as you build a simulation of the complex system, needing to make your simulation more and more complex (using more and more "elements"---physical elements, conceptual elements). A dynamic starts to illustrate itself: if you're creating a simulation of what's going to happen next in a complex universe, then the more accurately you want to do that---in cases where there is no reductive description of the history or unfolded dynamics of the world---the more you approach a situation wherein what you have is less and less like a simulation that you can run beforehand, and more and more like an exact copy of the thing you're trying to simulate in the first place...which, when time is part of the universe, means that you get less and less of the benefit of being able to predict events with your simulation, since the simulation takes as long to run as the universe itself.

In the part of your response that I quoted, you're talking about simple systems, clearly, systems that can be reductively described. In my proposition about classifying one's own complexity, or classifying a system that you cannot predict, clearly I am not talking about that kind of system.

I wasn't as rigorous as I should have been in my original post, perhaps. What I was trying to get at---and I'll make a weaker and more articulated assertion here---is that when one wants to figure out exactly how complex an observed system is, there are limits inherent in that: if you "cannot predict" the system, such that you have no exact reductive description of its unfolded dynamics, then there are elements in the unfolded history that, since you can't predict them, you don't understand well enough to eliminate the possibility that they contain complex behavior. If you can't predict a system completely, if you can't reduce it completely, then setting an upper bound for its complexity seems to me to be at best a dicey matter! (A functionally-capping upper bound, that is...an upper bound that is lower than the highest upper bound in your classification scheme---Class IV in the case of NKS.)

If I'm a teacher and I give you a test, and I have a model that always lets me guess right, before you take the test, about what you will answer on the test, then I can claim to classify your test-taking behavior in a wholly-more-secure way than if I can't predict what you will answer...because in the former case, since your behavior doesn't deviate from my reductive description, it would be significantly harder to say that there's anything in your behavior that's eluding me than if your behavior deviates from my best reductive description (prediction). If you're doing something I don't understand, something I can't predict, then you may very well be doing something that is highly complex, sensible, meaningful, etc., that, if I understood it, or could recognize it, or describe it, might affect my classification of your complexity (upward). I might be filling in ovals on a multiple-choice test to spell out "this class is boring" in a compressed binary format, completely ignoring the questions being asked of me. That's an example of a system whose output (my answers on the test) looks Class III to you, but is really Class IV. So while I obviously recognize that there is a taxonomic difference between Class III and Class IV systems, the example I just gave should be sufficient reason to doubt the general claim that behavior which looks random cannot contain complex, intelligent, or universal behavior.
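A sketch of the boring-class example (the two-bits-per-oval packing is just one arbitrary choice of encoding):

[code]
# Pack a message's 7-bit ASCII into multiple-choice answers, two bits
# per oval (A=00, B=01, C=10, D=11).
def encode(message):
    bits = "".join(f"{ord(ch):07b}" for ch in message)
    bits += "0" * (-len(bits) % 2)                      # pad to bit-pairs
    return "".join("ABCD"[int(bits[i:i + 2], 2)]
                   for i in range(0, len(bits), 2))

def decode(answers):
    bits = "".join(f"{'ABCD'.index(a):02b}" for a in answers)
    return "".join(chr(int(bits[i:i + 7], 2))
                   for i in range(0, len(bits) - 6, 7))

answers = encode("this class is boring")
print(answers)                    # looks like noise to the grader
assert decode(answers) == "this class is boring"
[/code]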

Distinct from that question, in my mind, is this one: if I know the rules of the system and its initial state, and I see every part of the output of the system from step 0, can an intelligent, non-random system produce behavior that looks random (Class III) from the very beginning? My example of the student differs from this in that I wasn't observing the student from step 0, didn't see its initial state, etc. In that example, perhaps obviously, only part of the output of the system looks random. Is there a Class III-looking CA, or some other simple system, that looks random from step 0, but that actually contains nonrandom, meaningful behavior? I certainly don't know, or else I would post the damn thing here. It is hard for me to imagine something like an ECA that could do this...organize itself through time, having instantly assumed a random-looking output. It seems to me that there would usually be some initialization period during which the thing had to decide to, for example, write compressed, binary-encoded messages in multiple-choice answers on a test. (To be more demanding of the test example, there would have to be the lookup-table part of a compressed message encoded somewhere in my test...the decision to be cryptic would have to be somewhere, right?---in the rule or in the system output(?)...and then, would it be possible for that decision or nature to itself be so cryptic that it looked random to me...(?)...that, frankly, is hard for me to imagine.) But, myself, I do not see reason enough to cast out the possibility that this could happen, that this kind of system could exist.

For one---and this is quite general, but I think relevant here---the way we're viewing CA output is part of why it seems to have form, or to be random, to us. Even the 2d grid, widely regarded as simple, and probably one of the least presumptive output visualization mechanisms our species can think of, contains assumptions and mappings that inform our ability to see the behavior of the system. It may be that different visualization or perception mechanisms for CAs (and other systems, obviously), when used, would force, say, the 256 ECAs into different Class I-Class IV categories. Maybe rule 110, when viewed through my network-unrolling methodology, looks like a different class than it does in the 2d-grid perception mechanism.

For another, I happen to have seen, and posted here years ago, systems very much like ECAs except with denser connectivity, if you will---the "water" systems, which are like ECAs except with two rows of memory. While they don't fulfill the requirements I've given above (a system that appears random from step 0 while actually containing highly complex, nonrandom order), they look a whole lot more like TV snow, on the whole, than any of the ECAs, while clearly not being purely random in their behavior. That doesn't, of course, mean that there are systems with no detectable initialization period that look completely random and yet contain decidedly nonrandom and meaningful behavior, but to me it's one reason to wonder if perhaps there might be such systems.

I suppose, in a way, that some classic PRNGs are non-CA examples of systems whose output, from step 0, even with visibility into the system rule, does not demonstrate a visible initialization period in which the system organizes itself into a state where it can slip secret messages past me in the mail---and yet those systems demonstrate decidedly nonrandom (cyclic) behavior, even while most people's way of perceiving the system makes it look completely random, through and through. It's not intelligent behavior, as far as I know, so I don't find that example very satisfying, myself.
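For instance, here is a sketch that exposes the cycle in a deliberately tiny LCG using Floyd's tortoise-and-hare method (the constants are chosen small so the period is easy to find; nothing here is specific to any particular real PRNG):

[code]
# One step of a small linear congruential generator.
def lcg_step(x, a=21, c=5, m=1024):
    return (a * x + c) % m

# Floyd's cycle detection: find the period of the state sequence.
def cycle_length(seed):
    slow = fast = seed
    while True:
        slow = lcg_step(slow)
        fast = lcg_step(lcg_step(fast))
        if slow == fast:              # tortoise meets hare inside the cycle
            break
    length, probe = 1, lcg_step(slow)
    while probe != slow:              # walk once around the cycle
        probe = lcg_step(probe)
        length += 1
    return length

print(cycle_length(7))   # the stream repeats exactly after this many steps
[/code]

Output-side statistics can look flat while the underlying state is strictly periodic--nonrandom structure that a casual observer's way of perceiving the system never surfaces.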

Is there a system where I can know the rule, see initial state and output from step 0, that looks random from step 0 (when using the 2d grid visualization, by which we'd say it's Class III), yet is meaningfully nonrandom when viewed in a different way? I don't know. I've looked through quite a lot of CA-like systems, programmatically searching for such an example, without finding one.

You're right, Mr. Cawley, you can classify anything you please. =) (I hope you keep doing so.) And I like the way you all classify things, Class I-IV and such. There's still a nagging question, though, in my brain, about whether I can be sure that every Class III system is, in fact, not a universal system that is just hard for me to see. Short of a satisfying example, however, I certainly defer to you that what looks unintelligently random is exactly that.

crosspost on my site: http://matthewvantemple.wordpress.c...me-in-the-mail/

Last edited by inhaesio zha on 03-19-2010 at 07:53 AM

Posted 03-19-2010 06:37 AM
inhaesio zha


Registered: Oct 2005
Posts: 403

I don't know AIT (algorithmic information theory), but it seems like if there were a known result somewhere that could show that such a steganographic-type [NKS] system could not exist, it might be in AIT. (?)

Posted 03-19-2010 08:10 AM