Jason Cawley
Wolfram Science Group
Phoenix, AZ USA
Registered: Aug 2003
Posts: 712 
AIT is fine at characterizing complexity for less-than-universal systems. It is just that things on the nonrandom side of AIT's divide are also more complicated than one might expect.
If you fix a family of systems (say CAs; TMs would also do if you like), you can ask for the smallest that produces a given sequence, and you will get a meaningful progression as you go from constant to period-1 to period-2 simplicities and the like. The NKS book shows such an analysis on page 1186. But fixing the system is the essential step in this case. Again, the issue is the general computational irreducibility of trying to go backwards from any behavior (a string, in your terms) to a simple universal system that creates it, without any underlying system fixed. Wolfram discusses this in the note on page 1067. The most relevant section reads:
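The fixed-system version of the question is mechanical enough to sketch in code. This is my own toy illustration (a tiny elementary CA with cyclic boundaries, not the book's page-1186 analysis): once the system is fixed, enumerating initials from shortest to longest and checking each evolution gives a well-defined "smallest initial producing this sequence".

```python
def step(cells, rule):
    """One step of an elementary CA (rule number 0-255, cyclic boundary)."""
    n = len(cells)
    return [(rule >> (cells[(i - 1) % n] * 4 + cells[i] * 2
                      + cells[(i + 1) % n])) & 1
            for i in range(n)]

def first_initial_producing(target, rule, max_len=8, max_steps=16):
    """With the system FIXED, enumerate initials from shortest to longest
    and return the first whose evolution ever matches `target`.
    The bounds make this a toy; the point is the well-defined ordering."""
    for length in range(1, max_len + 1):
        for code in range(2 ** length):
            cells = [(code >> i) & 1 for i in range(length)]
            state = cells
            for _ in range(max_steps):
                if state == target:
                    return cells
                state = step(state, rule)
    return None
```

Without fixing the rule first, no such enumeration is even defined, which is the whole point of the passage below.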
"even though one knows that almost all long sequences must be algorithmically random, it turns out to be undecidable in general whether any particular sequence is algorithmically random. For in general one can give no upper limit to how much computational effort one might have to expend in order to find out whether any given short program, after any number of steps, will generate the sequence one wants."
There is no reason to privilege any one universal system as a supposed benchmark. And one does not know whether string X appears early or late in the enumeration from short to long initials in system A. Since the test is whether the initial is shorter than the length of X, this matters. Differing by a constant is still differing. When one only wants to know how a formal problem scales with the number of elements, say, that constant is unimportant.
But when the question is whether there is any (universal) system and initial for it, shorter than X, that produces X, the "shorter than X" part is sensitive to differences of a constant, and the answer thus turns on the (universal) system that produces X most readily. It is in general formally undecidable whether a system-initial pair with an initial shorter than the length of X produces X. One can't simply plow through all the cases, because the system is not fixed and there are a countable infinity of them. In addition, you have the running-time difficulty Wolfram refers to above. (The 4th initial condition produces the string after running for thirty billion eons.)
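A toy arithmetic sketch of why the constant bites (all numbers mine, purely illustrative): the invariance theorem bounds the gap between two universal machines by a fixed interpreter constant, but the compressibility test "is there a description shorter than X?" can flip its answer across exactly that constant.

```python
# Hypothetical interpreter-prefix costs of two universal machines, in bits.
# The invariance theorem says their complexity measures differ by at most
# a constant like this -- but the constant is not zero.
HEADER_A = 8
HEADER_B = 120

def description_length(program_bits, header):
    """Shortest known description on this machine: fixed header + program."""
    return header + program_bits

program = 64   # a 64-bit program generating some 1000-scale string X
x_len = 100    # length of X itself

compressible_on_A = description_length(program, HEADER_A) < x_len  # 72 < 100
compressible_on_B = description_length(program, HEADER_B) < x_len  # 184 < 100
```

Same string, same program, two "equivalent up to a constant" machines, and one of them certifies X as compressible while the other does not.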
Suppose I make the following test. I give you 100 strings, each 1000 bits long. Some of them I will get from encoded artificial sources, say ordinary language texts, musical scores, and the like, each encoded to 0/1 bit streams in some perhaps different way from the previous. Others I will generate by a variety of universal systems (perhaps including simple transforms of them, like fixed subsets of their steps or locations, etc.). A wide variety of them, but each from an initial less than 1000 bits long in its own native formalism. You might get every third step in a stripe at position 351, from steps 2431 through 5431, of a range-2, 3-color CA from an 872-bit initial condition, with 2s treated as 0s. And another few score like that, each different, but in every case from an initial less than 1000 bits long.
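For a concrete taste of the algorithmically generated side, here is a minimal sketch (mine, far simpler than the range-2, 3-color example above): rule 30's center column, grown from a single black cell, is a bit stream produced by a tiny program that nonetheless looks statistically random.

```python
def rule30_center_column(n_bits):
    """Center-column bits of rule 30 grown from a single black cell.
    Sparse dict representation: only positions holding 1 are stored."""
    cells = {0: 1}
    out = []
    lo, hi = 0, 0
    for _ in range(n_bits):
        out.append(cells.get(0, 0))
        new = {}
        for i in range(lo - 1, hi + 2):
            l = cells.get(i - 1, 0)
            c = cells.get(i, 0)
            r = cells.get(i + 1, 0)
            # rule 30: new cell = left XOR (center OR right)
            v = l ^ (c | r)
            if v:
                new[i] = v
        cells = new
        lo, hi = lo - 1, hi + 1
    return out
```

A thousand bits of this would sit in the test batch indistinguishable, by inspection, from the encoded texts, yet its generator fits in a few lines.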
Do you claim you possess any systematic procedure that will identify the "artificial", "designed" strings (the texts and musical scores; in principle those might also be generated by finite grammars, but let them stand for "designed") and the algorithmically generated ones, such that the first are "random" and the second are "simple"? There is no such procedure. The strings you'd like to call "simple" are as complicated as anything in our universe, for all you can tell. Possessing a universal machine will not help you. Testing its behavior from its simplest 1000 bits of initials will not help you, even if you could, which you can't. If you knew the target system, then yes, you could program your universal machine to evaluate it. You'd still have 2^1000 initials to plow through, which you'd never finish, but you could start. Some of those evolutions you might leave running forever without knowing whether this one will "hit" sometime later. But you don't know the system, and detecting the simplest one (the universal system that in fact can make it from an initial of 872 bits in 5431 steps) is formally undecidable.
Running forward, knowing the rule governing the actual dynamics, is more than logarithmically faster than trying to infer back from string to initial. When the system is fixed you can try its initials in order, from the first to the 2^1000th, if your computer lives that long. When it isn't, you simply can't solve it. Suppose you confidently conclude: my computer did not produce this string in the (say) 2^50 initials I was able to run through before my memory ran out or it died; my computer is universal and differs from each given other by at most a constant; ergo the string cannot have been produced by a simple system with an initial shorter than 1000 bits. Then I simply pull my hand away and show you the CA evolution, which my universal computer was able to evaluate in "laptop time".
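The cost asymmetry is easy to put in numbers (a rough sketch with my own cost model, counting one cell update as the unit of work): forward evolution is linear in steps, while backward inference by exhaustive search is exponential in the length of the unknown initial.

```python
def forward_cost(steps, width):
    """Cell updates to evolve a width-`width` CA for `steps` steps,
    rule and initial both known."""
    return steps * width

def backward_search_cost(init_bits, steps, width):
    """Worst-case updates to try every possible initial of `init_bits`
    bits, running each forward to check for a match (system fixed;
    without a fixed system even this is not defined)."""
    return (2 ** init_bits) * steps * width

# the 872-bit-initial, 5431-step example from above, rounded to 1000 bits:
fwd = forward_cost(5431, 1000)          # a few million updates: laptop time
bwd = backward_search_cost(1000, 5431, 1000)  # 2^1000 times that: never
```

The forward run is trivially feasible; the backward search exceeds any physically realizable computation before it has scratched the surface.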
You can call the program simple because its rule is almost trivial (just quite specific within a very large space of trivials) and its initial is shorter than the test string. Where did I get my initial 872 bits from though, aren't they artificial? No, I got them from rule 30 via Random in Mathematica. My rule number too. I can easily stay under the size limit with several such generators "above" the main system. Can one say such behavior could not arise randomly, then? Well, it did.
Now pick any natural system whose supposed artificiality is to follow from its SC (specified complexity), and place a 0,1 signal from it in the batch. Distinguish it from any of the others, please.
The argument from SC was eminently more sensible than that, and did not make such claims. It claimed something much easier to test: that a sequence satisfy some elaborate constraint or exact formal property, while not being purely random in a naive entropy-measure sense, not an AIT sense. This is actually testable, and it is false. I gave some examples; they are all over. Sierpinski gaskets are SC, satisfy an elaborate constraint, and are not pure entropy. But they can be generated without anyone having designed the generator, or asked rule 90 to have that property. The 90th 2-color, range-1 CA is very early in an enumeration of CAs, much smaller than the full sequence specifying the gasket.
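The gasket example is easy to check directly. A minimal sketch (mine): rule 90, where each new cell is simply the XOR of its two neighbors, grown from a single cell, reproduces the Sierpinski pattern, with row n matching row n of Pascal's triangle mod 2, even though nobody asked the rule to satisfy that constraint.

```python
from math import comb  # binomial coefficients, for the Pascal comparison

def rule90_rows(n_rows):
    """Rows of rule 90 from a single black cell; row n spans -n..n."""
    rows, cells = [], {0: 1}
    for n in range(n_rows):
        rows.append([cells.get(i, 0) for i in range(-n, n + 1)])
        # rule 90: new cell = left neighbor XOR right neighbor
        cells = {i: cells.get(i - 1, 0) ^ cells.get(i + 1, 0)
                 for i in range(-n - 1, n + 2)}
    return rows

rows = rule90_rows(8)
# nonzero cells in row n sit at every other position; those positions
# carry exactly the binomial coefficients C(n, k) mod 2:
pascal_mod2 = [[comb(n, k) % 2 for k in range(n + 1)] for n in range(8)]
gasket_matches = all(rows[n][0::2] == pascal_mod2[n] for n in range(8))
```

The elaborate constraint (Pascal's triangle mod 2, i.e. the gasket) drops out of a one-line XOR rule that sits at position 90 in a plain enumeration.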
On QC, multiway speedup is quite likely to be possible, and it is what I'd expect from what NKS suggests. But multiway speedup is not countable speedup, let alone continuum-infinity (CI) speedup. QC proponents promise lots of things; let's see what they actually deliver. If they did deliver identifiably CI speedup, that would be strong evidence against any finite-nature position, certainly. It would support QC computational-universe types like David Deutsch ("it from qubit", rather than "it from bit", runs the shorthand). He is also a many-worlds, Wheeler-style QMer, though, which I for one am not.
Then there is the prevalent idea that anything polynomial is easy and doable, and anything exponential is impossibly hard. Well, that depends on the powers and coefficients of the polynomial, doesn't it? If the first term is to the hundred-billionth power times googolplex factorial, its being a polynomial will be small comfort. It is a practical engineer's rule of thumb, based on extremely small expected problem sizes. There is no assurance a discrete generator for fundamental physics, if there is one and we manage to find it, won't look like the nasty "polynomial" above.
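A quick arithmetic sketch of the point (constants mine, chosen to be far tamer than the hundred-billionth-power monster above, and still bad enough): a "polynomial" bound with a large constant and power is worse than a plain exponential at every size anyone will actually reach.

```python
def nasty_polynomial(n, c=10 ** 100, k=50):
    """c * n^k: formally polynomial, practically hopeless.
    The constant and power here are illustrative, not from the post."""
    return c * n ** k

def plain_exponential(n):
    return 2 ** n

# Across problem sizes well beyond anything practically reachable,
# the "impossibly hard" exponential is the cheaper of the two:
poly_worse = all(nasty_polynomial(n) > plain_exponential(n)
                 for n in range(1, 700))
```

The crossover where the exponential finally overtakes this polynomial sits near n in the high hundreds, long after both costs have left any realizable computation behind, so the asymptotic label settles nothing at engineering sizes.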
As for your last paragraph about an imagined argument, it certainly isn't mine, so I wouldn't know where to begin to comment. I don't construct arguments like that.
Getting back to potentially nasty polynomials and their connection to the intractable backward-inference problem: in a way they are connected. One of the main points of NKS is that lots of things various past measures have considered easy are not easy at all, and lots of entirely finite things with entirely simple generators are, on the output side, as complicated as anything anyone will ever solve. You know, in theoretical game theory one occasionally sees "theorems" like: if it is finite it is solved, because you just tree out all the possible games and then prune from the end-states. Well, no. Try playing Go that way. Tying it back to the main topic, saying "cannot" about universal systems, even entirely finite ones with entirely realizable (forward!) computational resources, is almost always going to be wrong. We can't limit their behavior easily like that; it is too rich even as entirely finite and supposedly AIT-"simple".
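The Go remark is worth putting in numbers. Back-of-envelope figures only (the branching and depth values are standard rough estimates, not exact): the game tree is perfectly finite, and perfectly hopeless to "just tree out".

```python
# Rough, commonly quoted estimates for Go; both are approximations.
GO_BRANCHING = 250   # average legal moves per position
GO_DEPTH = 150       # average game length in moves

game_tree_size = GO_BRANCHING ** GO_DEPTH   # on the order of 10^359 games
atoms_in_universe = 10 ** 80                # common order-of-magnitude estimate

# Finite, simply generated, and still vastly beyond any physical resource:
finite_but_unsolvable = game_tree_size > atoms_in_universe ** 4
```

"Finite, therefore solved" dies on numbers like these: the tree exceeds the fourth power of the atom count of the observable universe.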
On Kolmogorov, of course he did nice work, and I learned math from his texts, among others. I never got to meet him. Chaitin also helped develop AIT. You can read his recent books (e.g. MetaMath) for his take on NKS, which he has followed with interest. I met him at the first NKS conference. Solomonoff was also involved in developing AIT. I met him at the recent NKS Midwest conference in Indiana. He is working on machine learning.
Chaitin's homepage is here, incidentally:
http://www.cs.auckland.ac.nz/CDMTCS/chaitin/
I hope this helps.
