A New Kind of Science: The NKS Forum > NKS Way of Thinking > NKS disproves intelligent design

Vasily Shirin | Registered: Jun 2004 | Posts: 78

You cannot argue against a mathematical definition. You can complain that "complexity" in Kolmogorov theory doesn't match your intuitive idea of complexity, but that has no effect on the theory. It's the same as complaining that the real numbers used in calculus are not real at all. Or that complex numbers are not really that complex.
However, if some term in math invokes offensive emotional associations, it can always be replaced by a more agreeable term, to everybody's satisfaction - all the theorems will hold anyway. So I propose, for the sake of argument, to replace the name "Kolmogorov complexity theory" with "K-theory", and, everywhere this theory uses the term "complexity", to use "K-value" instead. That is, we define the K-value of a string as the minimum number of bits into which this string can be compressed ... and so on. Because ID arguments use complexity in exactly this sense, we also have to replace "complexity" with "K-value" there. The result is a number of texts in which the word "complexity" is never used, but K-values are used throughout.
On the other hand, since "complexity" in NKS doesn't mean Kolmogorov complexity, we have to leave that term in NKS intact. Now my question is: how can NKS disprove arguments based on K-values, if the term K-value is never even used in NKS, and the term "complexity", after the above substitution of words is made, is never used in any ID argument? It seems your argument is based on a terminological confusion, which has something to do with the fact that the meaning of "complexity" in NKS is not defined. Kolmogorov studied the complexity (OK, K-values) of strings - Wolfram talks about "complex behaviour", which, in NKS, doesn't have a numeric measure at all. Kolmogorov never claimed that long programs can "do" more than short ones (in his theory, a program never "does" anything except compute a string), except that longer programs REALLY can generate strings with greater K-values. Note, this holds BY DEFINITION, not because someone wants to belittle the virtues of short programs.

In your post, you provided some arguments to the effect that Kolmogorov theory is not a good theory at all, because different definitions of universal machines or rules of encoding can lead to different K-values for the same string. This is true, but the theory accounts for it, and it's not hard to demonstrate that all K-values measured on one machine change by no more than a constant when you switch to another machine. This is a pretty deep and quite consistent theory. [I encourage everyone to go to Wikipedia and read the biography of Kolmogorov; he was a great mathematician and a great man.]
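
In symbols, this is the standard invariance theorem (my restatement, not a quote from any text in this thread): for any two universal machines U and V there is a constant c_{UV}, independent of the string x, such that

    K_U(x) \le K_V(x) + c_{UV}   for every string x,

so measured K-values differ between machines by at most a machine-dependent constant.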

A similar confusion arises around the use of the words "can" and "cannot". In everyday language, the expression "event E cannot happen" really means: the probability of this event is so low that we shouldn't seriously consider it as possible, and should behave as if we are sure it will never happen. This is what we REALLY mean when we say "cannot". We use "cannot" only for brevity (except in math, where "can" and "cannot" have a more rigorous meaning). Can it be that while flipping a coin I will get 1000 heads in a row today? In principle it can happen, why not? But it's very unlikely, and I say: it can't. The same holds for any nonsense: a green dragon CAN come and eat me up, or I CAN be kidnapped by extraterrestrials, or whatever. Quantum fluctuation. You never know. And if you have ever used the expression "can't" in your life, I bet it meant a low-probability event, not a claim that you can theoretically prove with absolute confidence.

Not that I like ID arguments very much or find them 100% convincing. However, some of them make sense to me - at least, they are not 100% nonsense (compared to the neo-Darwinists' claims, which are).

Jason, now I have a question for you.
Let's assume the Universe is a computer (cellular automaton, Turing machine - it doesn't matter). But we know that people are busy right now creating a quantum computer. They believe this is possible; Wolfram mentions on page 1147 that he was one of the pioneers of this idea. As you know, Shor's algorithm was proposed for factoring big numbers - Wolfram mentions this algorithm on the same page, and doesn't deny the possibility of its being feasible. However, in the computerized Universe of NKS there's no indeterminacy, so we have to assume that "inside" the quantum computer there's some "conventional" computation going on behind the scenes, unbeknownst to us. From here, it seems, we can immediately deduce that factorization is a problem of polynomial complexity (again, because the REAL computation behind the quantum computer is done by a Turing machine, CA, or their equivalent). Therefore, when we claim that the Universe is a computer, we implicitly claim RIGHT AWAY that factorization is a problem of polynomial complexity. Isn't that too much to be taken for granted?

I know how a hardcore Darwinist would respond to this - along the following lines:
Whenever we select a number for factorization, we are not really free to select ANY number. Free will is just an illusion. Even when we believe we selected it freely, it is in fact the result of a deterministic process. This process results in numbers that really can be factored in polynomial time. How did we acquire this ability to select these, and only these, numbers? Well, it's a result of evolution. So, although in general we don't know whether any number can be factored in polynomial time, it is certainly true for the numbers we are able to select.

What's your take on this?

Last edited by Vasily Shirin on 11-21-2005 at 12:06 AM

Posted 11-21-2005 12:00 AM

Jason Cawley | Wolfram Science Group, Phoenix, AZ USA | Registered: Aug 2003 | Posts: 712

AIT is fine at characterizing complexity for less-than-universal systems. It is just that things on the non-random side of AIT's divide are also more complicated than one might expect.

If you fix a family of systems - say CAs; TMs would also do if you like - you can ask for the smallest that produces a given sequence, and you will get a meaningful progression as you go from constant to period-1 to period-2 simplicities and the like. The NKS book shows such an analysis on page 1186. But fixing the system is the essential step in this case. Again, the issue is the general computational irreducibility of trying to go backwards from any behavior - string, in your terms - to a simple universal system that creates it, without any underlying system fixed. Wolfram discusses this in the note on page 1067. The most relevant section reads -

"even though one knows that almost all long sequences must be algorithmically random, it turns out to be undecidable in general whether any particular sequence is algorithmically random. For in general one can give no upper limit to how much computational effort one might have to expend in order to find out whether any given short program--after any number of steps--will generate the sequence one wants."

There is no reason to privilege any one universal system as a supposed benchmark. And one does not know whether string X appears early or late in the enumeration from short to long initials in system A. Since the test is whether the initial is shorter than the length of X, this matters. Differing by a constant is still differing. When one wants to know how a formal problem scales with the number of elements, say, the constant is not important.

But when the question is whether there is any (universal) system and initial for it, shorter than X, that produces X, the shorter-than-X part is sensitive to differences of a constant, and the answer thus turns on the (universal) system that produces X most readily. It is in general formally undecidable whether a system-initial pair with an initial less than the length of X produces X. One can't simply plow through all the cases, because the system is not fixed, and there are a countable infinity of them. In addition, you have the running-time difficulty Wolfram refers to above. (The 4th initial condition produces the string after running for thirty billion eons.)

Suppose I set the following test. I give you 100 strings, each 1000 bits long. Some of them I will get from encoded artificial sources - say, ordinary-language texts, musical scores, and the like - each encoded to a 0-1 bit stream in some perhaps different way from the previous one. Others I will generate with a variety of universal systems, perhaps including simple transforms of them, like fixed subsets of their steps or locations, etc. A wide variety of them, but each from an initial less than 1000 bits long in its own native formalism. You might get every third step in a stripe at position 351, from steps 2431 through 5431, of a range-2, 3-color CA from an 872-bit initial condition, with 2s treated as 0s. And another few score like that, each different, but in every case from an initial less than 1000 bits long.
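
One of those generators, sketched in Python (scaled down to an elementary CA purely to keep the code short - the range-2, 3-color version differs only in the update function; the sampling scheme is the one just described):

<Code>
# Sample a stripe at a fixed position, every third step over a step window,
# from a CA run on an "872-bit" random initial condition. Takes a few seconds.
import random

def ca_step(cells, rule=30):
    n = len(cells)
    return [(rule >> (cells[(i - 1) % n] * 4 + cells[i] * 2 + cells[(i + 1) % n])) & 1
            for i in range(n)]

random.seed(872)
cells = [random.randint(0, 1) for _ in range(872)]   # the sub-1000-bit initial
stream, pos = [], 351
for t in range(5431):
    if t >= 2431 and (t - 2431) % 3 == 0 and len(stream) < 1000:
        stream.append(cells[pos])
    cells = ca_step(cells)

print("".join(map(str, stream[:64])) + "...")
</Code>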

Do you claim you possess any systematic procedure that will distinguish the "artificial", "designed" strings (the texts and musical scores - in principle those might also be generated by finite grammars, but let them stand for "designed") from the algorithmically generated ones, such that the first are "random" and the second are "simple"? There is no such procedure. The strings you'd like to call "simple" are as complicated as anything in our universe, for all you can tell. Possessing a universal machine will not help you. Testing its behavior from its simplest 1000 bits of initials will not help you, even if you could, which you can't. If you knew the target system, then yes, you could program your universal machine to evaluate it. You'd still have 1000 bits of initials to plow through, which you'd never finish, but you could start. Some of those evolutions you might leave running forever without knowing whether this one will "hit" sometime later. But you don't know the system, and detecting the simplest one - the universal system that in fact can make it from an initial of 872 bits in 5431 steps - is formally undecidable.

Running forward, knowing the rule governing the actual dynamics, is more than log faster than trying to infer back from string to initial. When the system is fixed, you can try its initials in order, from the first to the 2^1000th, if your computer lives that long. When it isn't, you simply can't solve it. If you therefore confidently conclude - my computer, in the (say) 2^50 initials I was able to run through before my memory ran out or it died, did not produce this string, and my computer is universal, and differs from each given other by at most a constant, ergo the string cannot have been produced by a simple system of less than 1000 bits - then I simply pull my hand away and show you the CA evolution, which my universal computer was able to evaluate in "laptop time".

You can call the program simple, because its rule is almost trivial (just quite specific within a very large space of trivials) and its initial is shorter than the test string. Where did I get my initial 872 bits from, though - aren't they artificial? No, I got them from rule 30 via Random in Mathematica. My rule number too. I can easily stay under the size limit with several such generators "above" the main system. Can one say such behavior could not arise randomly, then? Well, it did.

Now pick any natural system whose supposed artificiality is to follow from its SC (specified complexity), and place a 0-1 signal from it in the batch. Distinguish it from any of the others, please.

The argument from SC was eminently more sensible than that, and did not make such claims. It claimed something much easier to test - that a sequence satisfy some elaborate constraint or exact formal property, while not being purely random in a naive entropy-measure sense, not an AIT sense. This is actually testable, and it is false. I gave some examples; they are all over. Sierpinski gaskets are SC, satisfy an elaborate constraint, and are not pure entropy. But they can be generated without anyone having designed the generator, or asked rule 90 to have that property. The 90th 2-color, range-1 CA is very early in an enumeration of CAs, much smaller than the full sequence specifying the gasket.
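
That example fits in a few lines of Python (a sketch; the width and depth are arbitrary). Rule 90 makes each new cell the XOR of its two neighbors, and from a single black cell the gasket appears with no designer in sight:

<Code>
# Rule 90 from a single cell: prints a Sierpinski gasket.
WIDTH, STEPS = 63, 32
cells = [0] * WIDTH
cells[WIDTH // 2] = 1
for _ in range(STEPS):
    print("".join("#" if c else " " for c in cells))
    cells = [cells[(i - 1) % WIDTH] ^ cells[(i + 1) % WIDTH] for i in range(WIDTH)]
</Code>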

On QC, multiway speedup is quite likely to be possible, and it is what I'd expect from what NKS suggests. But multiway speedup is not countable speedup, let alone continuum-infinity (CI) speedup. QC proponents promise lots of things; let's see what they actually deliver. If they did deliver identifiably CI speedup, that would certainly be strong evidence against any finite-nature position. It would support QC computational-universe types like David Deutsch (it from qubit, rather than it from bit, runs the shorthand). He is also a many-worlds, Wheeler-style QMer, though, which I for one am not.

Then there is the prevalent idea that anything polynomial is easy and doable and anything exponential is impossibly hard. Well, that depends on the powers and coefficients of the polynomial, doesn't it? If the first term is to the hundred-billionth power times a googolplex factorial, its being a polynomial will be small comfort. It is a practical engineer's rule of thumb, based on extremely small expected problem sizes. There is no assurance that a discrete generator for fundamental physics, if there is one and we manage to find it, won't look like the nasty "polynomial" above.
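
Numerically the point is easy to check (a sketch with made-up constants, far milder than the ones above, using exact integer arithmetic):

<Code>
# A "polynomial" with a big coefficient and exponent loses to a plain
# exponential only at sizes no engineer's rule of thumb anticipates.
def poly(n):
    return 10**100 * n**50   # polynomial, but hopeless in practice

def expo(n):
    return 2**n              # "impossibly hard" by the rule of thumb

for n in (10, 100, 1000, 2000):
    print(n, "exponential is worse" if expo(n) > poly(n) else "polynomial is worse")
</Code>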

As for your last paragraph about an imagined argument, it certainly isn't mine, so I wouldn't know where to begin to comment. I don't construct arguments like that.

Getting back to potentially nasty polynomials and their connection to the intractable backward-inference problem: in a way they are connected. One of the main points of NKS is that lots of things various past measures have considered easy are not easy at all, and lots of entirely finite things with entirely simple generators are, on the output side, as complicated as anything anyone will ever solve. You know, in theoretical game theory one occasionally sees "theorems" like: if it is finite, it is solved, because you just tree out all the possible games and then prune from the end states. Well, no. Try playing Go that way. Tying it back to the main topic, saying "cannot" about universal systems, even entirely finite ones with entirely realizable (forward!) computational resources, is almost always going to be wrong. We can't limit their behavior easily like that; it is too rich, even as entirely finite and supposedly AIT-"simple", etc.

On Kolmogorov, of course he did nice work, and I learned math from his texts, among others. I never got to meet him. Chaitin also helped develop AIT. You can read his recent books (e.g. MetaMath) for his take on NKS, which he has followed with interest. I met him at the first NKS conference. Solomonoff was also involved in developing AIT. I met him at the recent NKS Midwest conference in Indiana. He is working on machine learning.

Chaitin's homepage is here, incidentally -

http://www.cs.auckland.ac.nz/CDMTCS/chaitin/

I hope this helps.

Posted 11-21-2005 02:33 AM

Vasily Shirin | Registered: Jun 2004 | Posts: 78

I'm glad that we finally agreed on something: namely, if a quantum computer is ever built and its ability to factor big numbers is demonstrated, then the Universe is not a computer. You still make some reservations concerning polynomial algorithms with big coefficients, which may in general make an exponential algorithm superior for small values - that absolutely doesn't matter in the context of our discussion. For the complexity (damn it, complexity again! and again a different meaning!) of Shor's algorithm is known to be O((log n)^2), and the coefficients are not big at all. And if it really factors numbers with performance like this, you can counter only by providing a conventional algorithm having THE SAME computational complexity, K*(log n)^2. This is going to be really hard. If you find such an algorithm (no matter how big the value of K is), it will be the single most important discovery in math.
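
As an aside, the classical scaffolding around Shor's order-finding step is simple enough to sketch (my own Python illustration, with the order found by brute force; the quantum computer's only job is to replace that one exponential step with a fast one; assumes n is an odd composite, not a prime power):

<Code>
# Shor's reduction: factoring n reduces to finding the multiplicative order r
# of a random a modulo n. Everything below is classically easy EXCEPT
# find_order, which is brute-forced here.
from math import gcd
from random import randrange

def find_order(a, n):
    """Smallest r > 0 with a^r = 1 (mod n); stands in for the quantum step."""
    r, x = 1, a % n
    while x != 1:
        x = (x * a) % n
        r += 1
    return r

def factor(n):
    while True:
        a = randrange(2, n)
        if gcd(a, n) > 1:
            return gcd(a, n)      # lucky: a already shares a factor with n
        r = find_order(a, n)
        y = pow(a, r // 2, n)
        if r % 2 == 0 and y != n - 1:
            return gcd(y - 1, n)  # guaranteed non-trivial factor

print(factor(15))   # prints 3 or 5
</Code>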

It looks like NKS has a strong opponent. It's not ID, not religion, not even Kolmogorov theory. It's QM. It's very easy to fight ID - everybody is beating these poor guys anyway. Why aren't you fighting QM instead? For this is the REAL enemy.

I, for one, share your skepticism about the possibility of creating a quantum computer. But I had the same kind of skepticism about quantum cryptography (it's based on the fishy and politically incorrect notion of entanglement) - and people are now already selling commercial systems for it. It's dangerous to bet against the QM guys these days - they have already beaten so many opponents. So maybe - just maybe - they will be right in this particular instance, too?

Note that a quantum computer may deal a major blow not only to NKS, but to the whole materialistic ideology, which to this day remains based on mechanistic ideas of total determinism. What's curious is that QM has been known for at least 80 years, but the mainstream "scientific" ideology basically ignores it to this day. Many people continue to cherish the hope that somehow, inside the quantum, some kind of COMPUTATION is going on - wheels are rotating, buttons get pushed, zeros get XORed with ones...

The possibility of a quantum computer running Shor's algorithm is a major intellectual challenge for the deterministic school.

My question is: is this school preparing a plan of defense? Is there any discussion of this issue? Do they really understand all the ramifications of Shor's algorithm?

Posted 11-21-2005 05:03 PM

Jason Cawley | Wolfram Science Group, Phoenix, AZ USA | Registered: Aug 2003 | Posts: 712

Who wants a defense? If the truth is over that-a-way, that-a-way we go. My proviso about QC, though, was that multiway speedup is evidence for non-classicality but not for non-determinism. Real continuum infinities are quite different from sampling 4 or 16 ways an event could happen. The latter would be perfectly compatible with an underlying finite generator for QM. If that finite generator is involved enough and operates on a small enough scale compared to our observables, it might support plenty of QC effects not expected classically, without the full integration over an infinity of possible paths expected by our existing QM formalism. But NKS is not wedded to classicality; on the contrary.

Wolfram would like to recover determinism if possible, but perhaps by giving up things like locality. Personally, I'm not wedded to determinism. It is a good epistemological principle and I think we should make determinist guesses and see if they can be made to hold up. Wolfram thinks it is more fundamental to science than that and does expect it can be recovered with the right underlying discrete generator for QM. I think it'd be great if he is right but don't pretend to know.

Incidentally, I don't know why materialism comes in. Computational ideas are about purely formal relationships; it doesn't matter what is being related. That is what I meant earlier when I spoke of formal thinking as a place where realist and idealist thought can meet. Materialism may be a popular form of realism, but it does not exhaust it. An objective reality out there whose formal relations govern events is quite sufficient. NKS isn't wedded even to that; it has few philosophic commitments (it might be interpreted as being about what we can know, not what is, e.g.). Various NKS researchers undoubtedly have more, including Wolfram, but philosophic additions are optional extras for our intuition and our noodling about big questions, not essential to doing the science.

Posted 11-21-2005 10:22 PM

Vasily Shirin | Registered: Jun 2004 | Posts: 78

<Quote>If that finite generator is involved enough and operates on a small enough scale compared to our observables, it might support plenty of QC effects not expected classically, without the full integration over an infinity of possible paths expected by our existing QM formalism</Quote>
If I understand you correctly, you hypothesize that some invisible bee flies from one qubit to another collecting honey from each of them, and that it somehow visits all 2^N states; all this happens in frozen observable time, and the behaviour of the bee can be described by some algorithm?
This is a reasonable idea - Einstein was one of the people looking in this direction - but unfortunately neither he nor anybody else could suggest any intelligible model for it. People familiar with the subject (including Feynman) eventually lost any hope that such a model is possible. Do you have any update on the state of the art? For this, I believe, is a central problem of all modern science; I have read many popular (and less popular) books on the subject, understood nothing, and feel highly intrigued. I even tried to think about it myself, but all my efforts so far have produced nothing but a terrible headache.

Posted 11-24-2005 03:48 AM

Vasily Shirin | Registered: Jun 2004 | Posts: 78

In this post I want to provide arguments supporting the following statement: "Intelligent Design is a premise of NKS". Please note that I don't want to prove that either of those theories is correct, just that the latter is based on the former.
Here is the argument.

Observe that NKS makes two basic claims:

A) simple programs can lead to complex behaviour (in more explicit terms, short programs can be universal);

B) our Universe is a computer.

Observe further that claim B is somehow deduced from A. That is, Wolfram doesn't just assert that the Universe is a computer because he likes it that way, but BECAUSE he had earlier discovered the said property of short programs.

I've given a considerable amount of thought to this deduction. At first glance, there's no obvious connection between the two statements. How can the shortness of universal programs help here? And what would happen if universality could be achieved only by long programs? Let's say the first rule to enjoy the property of universality were not 110 but 314159264. Would it make claim B look less likely? Something is definitely missing here; my task was to reconstruct these missing links. After a couple of sleepless nights, I was able to come up with a reasonable theory, as detailed below.

First, we should answer the question: what is the advantage of short programs over long ones? I can think of only one advantage: when we write a program, there's no point in writing a long program where a short one can do the job. This is one of the main principles of software design. Of course there are exceptions, and a short cryptic program can be less maintainable, but if you avoid extremes, this is the general rule. A similar rule holds in science and in any kind of engineering. Really, why bother working hard?
On the other hand, what happens if we can't find a simple solution? Maybe we have to implement a complicated one? It depends. Sometimes we do, but no one is happy with it: high expenses, low maintainability, and very big risks are guaranteed. And experience shows that a project may very well eventually fail completely due to complexity. Every manager knows: if things become complicated, it makes sense to reconsider the requirements, drop some of them, and implement just whatever is simple enough to be practical. The moral of the story is: a short program is definitely a good thing, whereas a long program is certainly a bad thing. And had we had to create the Universe ourselves, we would probably use the same design principle.

What we have established so far is: if (note that IF!) a program is designed by an intelligent agent, it has a very good chance of being as simple as possible (and no simpler). And because the only requirement Wolfram places on the Universe is its ability to support universal computation, the Universe SHOULD be a simple (!) program running on a simple (!) computer. And Wolfram proves that such a simple program and such a simple computer really exist!
Therefore, in order to deduce B) from A), we need two additional axioms:
1) the Universe was designed by some agent who, in his work, applies the same engineering principles as we do;
2) had sufficiently simple universal programs not been found, it wouldn't have made sense to bother creating the Universe at all, due to the associated risks and costs outlined above.

This completes the proof.

REJECTED ALTERNATIVES
---------------------

Other ways of deducing B) from A) were also considered. In particular, I examined the idea of connecting the two statements by ways not involving the notion of design. My plan was to demonstrate that a small program can emerge by itself (e.g., as a result of a quantum fluctuation or some other inexplicable phenomenon), whereas the same event resulting in a longer program would be increasingly unlikely. This plan was eventually abandoned upon the realization that the difference between no program at all and the simplest of programs is much bigger than the difference between a simple program and a complicated one. To make things worse, I couldn't explain where the hardware came from - and if we factor in the hardware, the whole setup doesn't look so simple. Also, if we allow software and hardware to emerge from nowhere, it's not obvious that simple things are more likely than complicated ones, unless our ability to understand what's going on is taken into consideration as one of the design goals - but then we are again talking about design, in contradiction with our premise.

ACKNOWLEDGEMENTS
----------------

I want to thank the contributors to this thread who provided the basic idea of a possible connection between the principles of technical design and the foundations of NKS.

Posted 11-29-2005 03:54 AM

Jason Cawley | Wolfram Science Group, Phoenix, AZ USA | Registered: Aug 2003 | Posts: 712

Mathematica isn't written in Rule 110. It is a million lines of code. The premise about designed systems being simple is false.

Moreover, a major point of NKS is that the loose association of "computers" with "artificial, designed" is in fact false. It is an artifact of the historical sequence in which we became aware of the phenomenon of computation. We became aware of it through engineered, artificial systems first. But NKS shows us that nature has been computing all along.

Nature didn't need Turing to explain what universal computation was. We did, in order to notice it was possible. After we had been familiar with the phenomenon in engineering contexts for a few decades, we noticed similar things happening in nature. And finally we realized that all our engineering merely exploits formal facts equally true of the natural world, and already exploited within it. The association "computer" - "artificial" formally does not follow. It is a habit formed by a historical process of discovery.

Any idealist can cheer that formal realities lie behind the equivalence. Being purely formal or math-like, the facts about computation are true in all possible worlds. They are therefore not evidence for anything proper exclusively to the empirical one.

Anyone who, for entirely distinct reasons and independent of supposedly necessary deductions from evidence, likes the idea of designed universes can speculate about it all he likes, and can notice it is compatible with a universe that computes. He might miss part of the point of NKS if he does not see how general its formal discoveries are, but that is up to him. But if he claims NKS is evidence for this, he overstates the case. It is nothing of the kind. Nor does it depend, in any link of its argument, on any such idea.

Posted 11-29-2005 01:35 PM

Vasily Shirin | Registered: Jun 2004 | Posts: 78

What's your definition of intelligence, then? As follows from your posts, just about everything we know could be generated by a program (including, but not limited to, the Ninth Symphony). Therefore intelligence cannot be defined in any consistent way; we cannot even be sure it exists at all.
But when we look for extraterrestrial intelligence (e.g., the SETI project), we are ready to use some criteria to identify it. My point is: no matter what criteria we use, they can be applied to whatever we see in nature (e.g., DNA), and we will conclude that it satisfies the same criteria. Please comment on this.

Posted 11-29-2005 03:04 PM

Jason Cawley | Wolfram Science Group, Phoenix, AZ USA | Registered: Aug 2003 | Posts: 712

Wolfram addresses the subject starting on page 822. He thinks in the end general definitions of intelligence based on some single specific set of criteria (and in particular a capacity for sophisticated computation) will not work, and "any workable definition of what we normally think of as intelligence will end up having to be tied to all sorts of seemingly rather specific details of human intelligence."

Computational ability is yet another feature we might have thought special to humans that in the end simply isn't, and that is shared by all sorts of other systems we would not normally regard as intelligent. And the actual marks by which we pick out what we do regard as the separate category "intelligence" in the end go back to particular details of our history - which probably need not have been as they were to result in some approximation of intelligence as we recognize it. This is likely to make SETI recognition fundamentally difficult, far more difficult than naive early SETI enthusiasts thought. That is the upshot of that section of the book.

Personally, I consider intelligence something of a mixture: of computational sophistication, shared with many other systems I would not call "intelligent", and consciousness, shared with a much narrower range of other animals, which I doubt are all that computationally sophisticated - certainly compared to us. I like to call the first "cleverness" and the second "consciousness", and then I think of intelligence as lying in their intersection. Deep Blue plays chess so well I have no doubt whatever that it is clever. Other mammals are to me quite obviously conscious (how low in the orders that goes might be debated, but that it happens well "before" us I think is obvious - consider dogs, e.g.), but some of them are probably pretty dumb by our standards (and pretty predictable as a result, in specific enough situations at least) - though perhaps clever enough by those of the rest of the biological world.

Others, in the cybernetic tradition, like to define intelligence in terms of goal-directed behavior and adaptation. A thing is intelligent if it changes how it behaves in order to get what it wants or needs, runs one formulation. Dennett offers a hierarchy of categories in terms of our usual assessments, purposefully staying subjective about them, so as not to beg questions about what is objectively necessary or sufficient for each. Some systems we explain basically mechanically; others we ascribe internal states to, in order to explain apparent variation in behavior with different inputs; and last, some we explain by ascribing changes in their internal states or opinions, altering not just their specific responses but the patterns of those. Effectively this is sensitivity analysis on system behavior, but viewed as a subjective modeling problem.

Whether our intuitions in such matters are justified he treats as an open question. Certainly in practice we make use of teleological explanation in such cases, and it is hard to imagine e.g. trying to explain a trading floor (to pick a typical economic example) without reference to the specific intentions of its various actors.

When we have lots of common history with the agents, it is clear enough how this works. We can mimic their internal states sympathetically, and thereby get important guidance about what they are doing and will do. The old positivist-behaviorist attitude that all explanation should be reducible to objective description is clearly just false, in practice, in such cases. In NKS terms, we should not expect the calculations involved to be readily reduced, and should expect we will have to emulate the underlying system step by step to figure out how it will behave - with its internal process of calculation critical and arbitrarily complex.

Wolfram thinks most systems capable of sophisticated computation have the common properties we are used to from intelligent examples. He is less sympathetic than I am to the idea that some extra specific ingredient is involved (my "consciousness" above, e.g.). You can read that as seeing intelligence everywhere. You can also read it as saying intelligence naturally occurs because it simply isn't that hard to achieve, or is less of an "accomplishment" than many suppose.

Posted 11-29-2005 06:03 PM

Vasily Shirin | Registered: Jun 2004 | Posts: 78

That's what I expected to hear: intelligence cannot be defined in computational terms. This notion is so blurred that we had better eliminate it from our reasoning about the nature of the Universe. (And, by the same token, we should eliminate such phenomena as consciousness, emotions, sensations, etc. - we can't provide computational definitions for them, and don't even care - because all of the above correspond to some combinations of bits and don't bring any ADDED VALUE to the discussion; they only obfuscate it.) By eliminating these notions we can, among other things, avoid unpleasant questions like: does the program executing in my computer FEEL anything? Why does a universal rule running in the Universe lead to configurations of cells that feel pain, while the same rule running on a PC doesn't? Etc., etc. - all these questions are meaningless: no matter whether it feels or not, its behaviour for an outside observer will remain absolutely the same, as it can be described totally in terms of zeros and ones.
This is the viewpoint of NKS. I know your personal views can be different - you mentioned it - but this is primarily a discussion of NKS; your views are not known to me in their entirety, so it wouldn't be appropriate for me to argue against something I don't know.

Upon reading NKS, I had a feeling that things don't quite add up there. I'm talking about the philosophical concept, not the mathematical content (there are a number of nice results there, but it's not for me to judge how important they are). Eventually, I was able to formulate the point. NKS is all about short programs and their wonderful properties. There's no doubt about this. The main philosophical conclusion of the book is that short programs are so powerful that we don't need to look for any other mathematical apparatus to describe the world, and even more than that: our world IS a program (Wolfram goes on to predict that the "code" of this program will be found shortly). Interestingly, Wolfram uses the expression "simple programs" whenever he talks about short programs, and the expression "simple programs lead to complex behaviour" is reiterated in different forms a great many times in the Book. And I asked myself: why this emphasis on simple (short) programs? What's so special about them that justifies such far-reaching philosophical conclusions?

Let's approach the issue from another direction: if it weren't for the "simplicity" of these programs, would NKS ever have been written? Suppose, for example, that Wolfram had made the same investigation and found that all the first 10 million rules lead to repeating patterns or blank screens, and only then found something non-trivial - say, in rule 314159264 - and proved this rule is, in a sense, universal (considerable effort is still required to map this universality to the universality of a Turing machine, but let's not focus on that right now). Unfortunately, this discovery wouldn't impress anybody (universal Turing machines have been known all along), and more importantly, it wouldn't impress Wolfram himself enough to justify the further quest.
In other words, what was the ADDED VALUE brought by NKS? Certainly, it has something to do with SHORT (SIMPLE) programs. Wolfram doesn't say: based on the results of Turing, I concluded that the Universe is a computer. NO! He says: based on MY RESULTS! There's certainly something in short programs that is absent in long ones, and this "something" is exactly what leads Wolfram to his conclusions. And what is this magical property of short programs that is missing from long ones, and makes them the subject of a 1000-page book?

I provided my arguments connecting this "something" with Wolfram's unconscious (or maybe very conscious? who knows?) belief in intelligent design, and it would be redundant to repeat them here.
And what was your response, Jason? In vain was I looking for your version of an explanation of what makes simple (short) programs so special for Wolfram (this was my main question). Instead, you built a straw-man argument, arguing against some point I never made on my own behalf - I just tried to reconstruct WOLFRAM's logic that led him from short programs to the Universe-as-computer concept (you could easily notice the grains of irony embedded in this "reasoning"). And yes, it comes out a bit naive, a bit circular, etc. - but what else would you expect from reasoning that was never EXPLICITLY MADE by Wolfram? Again, it's my theory of Wolfram's unconscious motivations. This theory is based on the observation that the notion of a simple (or short) program makes a difference only in connection with intelligence. However, according to your last post, the notion of intelligence is blurred and cannot serve as a basis for philosophic arguments about the Universe at all.

You don't like my theory?
Fine, propose your own way of concluding that the Universe is a computer based on the statement "simple programs lead to complex behaviour". And note: your reasoning should UTILIZE the FACT OF SIMPLICITY of these programs. Propose something that is true for short programs and not true for longer ones.

I think I made a strong case. Had we had to present our arguments before a jury, I think I would have a good chance of winning. (You can verify this statement by letting some of your friends read my last three posts. I just beg you: no biologists should be involved - these people lack any logic and common sense.)

Posted 12-01-2005 06:38 PM

Jason Cawley | Wolfram Science Group, Phoenix, AZ USA | Registered: Aug 2003 | Posts: 712

Now you are just making things up.

On pain etc.: it is obviously distinct from intelligence, or from computation, and involves dedicated systems like nerves and such, very much a part of our particular history. It is clearly present in various animals not renowned for their mental power. If you want to regard consciousness as distinctive of animals you are perfectly free to do so, but computation is something distinct from it, and goes on in plenty of systems that do not share those other features. It amounts to a loose association.

Does my computer feel jazzed when it crunches numbers? No, it doesn't have any nerves or endorphins or... It is a silly question.
You don't explain such features of living things by abstracting from their biology, when those features are clearly caused by that biology. My computer does, however, get the right answer, in the same way anyone decent at math might. The latter is computation. If you want to understand how the other systems (pain, feeling, etc.) work, you have to look directly at them.

As for what simplicity brings: it is entirely plausible that simple enough interrelations will arise without any design involved. If you shake up a modest sample (100s, 1000s) of each of 3 things that can each stick together 3 ways, you will hit all of them. That is not true of much more complicated rules.

If programs that produce arbitrarily complex behavior are reasonably common among the simplest allowed relationships, numbers of elements, etc., then it is entirely plausible that natural systems will just happen to fall into such relations - that they will "hit" some complex rules out of the space of possibles without any extraneous explanation (selection effects or whatever) required. If you had to go out to the hundred-trillionth or 10^60th, it would be much less plausible that an instance just falls out that way.

On the discovery and methodology side, we can enumerate classes of programs this simple, and just try all of them and see what they typically do (see the sketch below). We cannot readily examine all possibles out of program spaces in the 10^600 range, so we won't ever find one that behaves in target way X just by looking for one. And we will expect complex phenomena to be common, even in systems lacking other supposed prerequisites for complexity - in turbulent fluids, vacuum breakdown, drainage, the growth of crystals, etc. Well below the threshold of selection, let alone of artificiality.
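
In miniature (my own toy code, not anything from the book): the 256 elementary CA rules are few enough to try exhaustively, and even a crude classifier starts sorting them into kinds of behavior:

<Code>
# Run elementary CAs from a single black cell and crudely classify each:
# does the pattern die out, settle into a repeating cycle, or keep changing?
def ca_step(cells, rule):
    n = len(cells)
    return tuple((rule >> (cells[(i - 1) % n] * 4 + cells[i] * 2 + cells[(i + 1) % n])) & 1
                 for i in range(n))

def classify(rule, width=101, steps=300):
    cells = tuple(1 if i == width // 2 else 0 for i in range(width))
    seen = {cells: 0}
    for t in range(1, steps + 1):
        cells = ca_step(cells, rule)
        if cells in seen:
            return "dies out" if not any(cells) else f"repeats (period {t - seen[cells]})"
        seen[cells] = t
    return f"still changing after {steps} steps"

for rule in (0, 4, 90, 30, 110):
    print(rule, classify(rule))
</Code>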

I've been very patient with you, but by now you are making up arguments other than mine you'd rather talk about, and pretending to put words in other people's mouths about NKS. I'm about done.

Posted 12-01-2005 09:39 PM

Latyshev | Registered: Dec 2005 | Posts: 6

hmm...

Maybe bring the discussion back to the topic a bit...

You can't prove the existence of ID.
You can't disprove the existence of ID.

Hence I claim that ID doesn't exist.

PS Your discussion is great, guys, but it looks like you have gone too far up in the air, losing the ground beneath your feet.

Last edited by Latyshev on 12-02-2005 at 10:24 AM

Posted 12-02-2005 10:04 AM

Vasily Shirin | Registered: Jun 2004 | Posts: 78

Hey, Latyshev,
your post reminds me of the beginning of Bulgakov's "Master and Margarita". It would be refreshing for you and the other contributors to this forum to re-read the novel. It's impossible to discuss this issue without bringing up some literary associations.

Posted 12-02-2005 05:00 PM

Vasily Shirin | Registered: Jun 2004 | Posts: 78

OK, back to the discussion:

<Quote>As for what simplicity brings, it is entirely plausible that simple enough interrelations will arise without any design involved. If you shake up a modest sample (100s, 1000s) each of 3 things that can each stick together 3 ways, you will hit all of them. That is not true of much more complicated rules </Quote>

According to Wolfram, the Universe is a computer running some rule. This initial rule is a concrete rule, right - like 110 or 12345? Do you really mean that this initial rule was produced by shaking something up? And if so, why was the shaker limited to just 3 attempts? OK, there was no shaker - it was shaking itself (it's not clear how: this would require another program, but we already agreed that we are talking about the initial program). Still, it's unclear why only 3 or so attempts were possible. Why not 10^80? Why not 10^(10^80)? And the hardware - what about the hardware? For even the simplest of programs wants some hardware for its execution. Was it also produced by shaking? Then WOW. (By the way, how simple really is the hardware required for the execution of the simplest program? How many bits do we need to describe this hardware? And what device will execute this description to actually produce the hardware?) These are all hard questions, Jason; there's clearly a problem of bootstrapping here.

So neither I nor, I believe, other readers of this forum can accept your explanation.

I made things up - so what? I clearly stated that I was making them up, and challenged you to provide a better explanation of certain phenomena. Now, following your protests, I'm temporarily withdrawing my made-up story. I promise to tear it to pieces as soon as I receive a satisfactory explanation of the role of SIMPLE programs in the process of bootstrapping. I have been looking for this explanation my entire life, and I will be really grateful if you provide one.

How does all this relate to ID? Well, you can look at ID as an attempt to explain bootstrapping. I can even imagine that there exist some supporters of ID who are ready to believe in the "Universe as computer" idea, and who implicate a Designer only to explain the origin of the Computer and its Program.

As for my question about pain - it's not silly at all. Actually, it is question #2 on my personal list of transcendent questions. Endorphins? Well, apparently Wolfram doesn't share your views. Although I'm afraid you will again accuse me of making things up, so let Wolfram speak for himself:
<Quote>
[page 1100] From looking at the brain one might guess that parallel or other non-standard hardware might be required to achieve efficient human-like thinking. But I rather suspect that - much as in the analogy between birds and airplanes - it will in the end be possible to set up algorithms that achieve the same basic functions but work satisfactorily even on standard sequential-processing computers.

[page 825] But just as in the case of intelligence, I believe that no reasonable definition [of life] can actually be given. Indeed, following the discoveries in this book I have come to the conclusion that almost any general feature one may think of as characterizing life will actually occur even in many systems with very simple rules.
</Quote>

There are a great many places in the book where Wolfram expresses similar ideas (look up, for example, "artificial intelligence" for more). So what about emotions - aren't they part of our thinking? I started to look for an explanation of emotions in NKS, and found none. Probably they are not important for life, intelligence, or thinking. I tried to search for other words - feeling, sensation, consciousness - and found nothing. In one place Wolfram promises to discuss consciousness, but apparently this plan was not implemented. It's hard to believe, but this monumental book, which can compare in its scope only with the Encyclopaedia Britannica, never mentions these notions.
On the other hand, some keywords are used extensively:
simple - 1360 occurrences
complex - 300 occurrences
simple rule - 344
behavior - 745
complex behavior - 188

And no feelings, sensations, emotions - nothing. No consciousness either. Just BEHAVIOR.
So let's conduct a thought experiment: suppose Wolfram's dreams came true, and AI was created on a regular computer. And this computer simulation creates some individual that takes part in our forum. And suppose this individual writes:
"I've been very patient with you, but by now you are making up arguments other than mine you'd rather talk about, and pretending to put words in other people's mouths about NKS. I'm about done."
How should I feel about it? Should I feel sorry for making somebody so angry? But if I am dealing with a computer simulation, it cannot be angry - Jason assured me some endorphins are needed for that - so I shouldn't feel sorry; it would even be stupid to feel sorry... On the other hand, according to Wolfram, I'm a computer simulation myself, so ... things are getting really complicated here.

Can one really think without emotions, feelings, and consciousness, and demonstrate the same BEHAVIOR? Can we still call it LIFE, as Wolfram suggests? Doesn't LIFE imply that we should FEEL something? And can anything be achieved intellectually without feeling? For example, could NKS have been written by a computer? Well, I don't know. There are really almost no emotions expressed there. Except one: PRIDE. But maybe - just maybe - no book would ever be written without some dose of it?
This is an issue even Intelligent Design can't explain.

Posted 12-04-2005 04:37 AM

Vasily Shirin | Registered: Jun 2004 | Posts: 78

"The best lesson life has taught me is that the idiots in many cases are right" -W. Churchill

Posted 12-08-2005 03:24 PM