Randolph Revoredo


Registered: Oct 2005
Posts: 4

Falsifiability

Stephen Hawking says that if black holes are finally proven not to exist at all, all his work will have been wasted. Einstein said much the same about relativity when he predicted an unexpected curvature of starlight during solar eclipses (later verified, confirming general, not special, relativity).

I asked myself whether Karl Popper's assertions about science make sense here, and thought: if the Principle of Computational Equivalence is a scientific theory (as it is), then it should admit some kind of "critical test" that could prove that all this (brilliant) work might be a failure.

My answer was: yes, the as-yet-undiscovered "simple basic rule" that explains the behaviour of the whole universe is the critical test of the Principle of Computational Equivalence. If it is not found, the principle fails.

Am I right?

RR

Posted 10-01-2005 06:41 PM
Jason Cawley
Wolfram Science Group
Phoenix, AZ USA

Registered: Aug 2003
Posts: 712

One can certainly apply Popper's philosophy of science to NKS. I also think NKS suggests some amendments to that philosophy of science in a few places. And one can analyse the PCE claim in particular and ask how it might wind up being wrong, in order to determine its empirical content as it were, and to verify that it "sticks its neck out" and makes real claims about the outside world.

But I don't think the idea of a single simple fundamental rule is the critical prediction of PCE. They are logically separable, in the sense that either might hold without the other. The "rule for the universe" (RU for short) claim may be the most striking that Wolfram advances in NKS, but it is not the backbone of the logical argument, nor is finding it entailed by PCE.

To see this we have to look in detail at the PCE claim, see what it is about and what it is trying to explain, and then see how that explanation might fail, empirically. Which I will do below. But first, to clear away one issue that might give rise to misconceptions at some point, I want to explain the possible amendments or extensions to Popper that I alluded to above.

There are some parts of pure NKS that are not falsifiable, any more than most of mathematics is falsifiable. That a given CA from this initial condition behaves in exactly this manner, is a formally required, logical fact. It has the status of a mathematical construction, like a geometric diagram. Before one has made the construction, the relations it makes plain might not have been apparent. But once it is made and shown (and, to be sure, checked as proofs in math have to be checked) it isn't going to be wrong under any possible empirical state of affairs.

In NKS we make far greater use of an experimentalist's methods, even in areas where the results (once found) have the above character, than has been customary (at least in recent times) in most of traditional mathematics. There are other parts of NKS that have the character of mathematical conjectures. The curious thing about formal conjectures, from a Popperian point of view, is that they appear unfalsifiable after the fact (since, if true, they are logically or formally entailed), but still "stick their neck out", because they might be wrong from the standpoint of the one making the conjecture, who knows only what he knows at the time the conjecture is made.

Popper effectively wanted to rely on something not being logically entailed, even after the fact, to separate formal from empirical truths. But a formal conjecture, before the computation (or proof) that settles the answer, has all the characteristics of an empirical hypothesis, except logical falsifiability after the fact. Popper was implicitly assuming that the logical entailments of formal premises would be simple or obvious. But that is in general not the case, and in NKS it is rarely the case.

Usually, in pure NKS, even if we know exactly how a given formal system works (its formal description, rule, code that evolves it etc), we do not know what it will actually do until we perform an elaborate formal calculation. Having the program is not having its output - between stands a mass of non-trivial calculative work. And the number and variety of systems to consider, renders it completely impractical to perform even a minute fraction of these calculations explicitly. We pick which formal experiments to perform, to learn as much as possible about other similar ones, because we know we can't perform them all. We necessarily rely on conjecture to extend the limited space of calculations we can explicitly perform.
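To make this concrete, here is a minimal sketch in Mathematica (an illustration added here, using the built-in CellularAutomaton function). The complete rule 30 fits in a single byte, yet nothing in that byte tells you what row 16 of its evolution from a single black cell looks like; the only known general way to find out is to compute rows 1 through 15 first:

    (* the whole rule is the single byte 30 = 00011110 in binary; *)
    (* the picture below cannot be read off from it without running the system *)
    ArrayPlot[CellularAutomaton[30, {{1}, 0}, 16]]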

Thus my proposed amendment to Popper, as follows. Even in formal contexts, where after the fact a definite calculation can show that some consequence necessarily follows from some set of premises, if before performing that calculation you had no way of knowing what that outcome would be, then conjectures about that outcome are meaningful science, "effectively" falsifiable. In other words, forget about any imaginary Platonic standpoint outside time in which all logical or formal truths are trivial, because we don't live there. An involved enough formal truth has a status akin to an empirical truth, and the methods used to discover such truths can, and in practice do, include the inductive guesswork and falsifiability setup that Popper describes for empirical science.

Now, none of that means that applied NKS doesn't face the same issues of reaching the empirical world, as opposed to merely formal truths, that any other theory faces. It does. You can have a nice formal theory that has all kinds of nice formal characteristics, but happens not to be the formal relationship actually going on in empirical system X or Y. For formal-to-empirical correspondence, the usual Popperian machinery applies. The only quibble here is that the consequences of a complicated enough formalism may not be trivial to work out, for the reasons described previously. Formal experiments may be needed to tell you what your model predicts, because that may not be obvious from the construction of the model alone.

All of that was an aside about amending Popper - in ways I hope are friendly, incidentally. Now to PCE, what it is trying to explain, and how it might wind up being wrong.

PCE is trying to explain the phenomenon of complexity. It is a thesis about the essential, underlying cause of certain things appearing complex to us, while others do not. PCE claims there is something real and essential involved, not simply detailed and contingent limitations of our thinking, formalisms, or apparatus. And it claims complexity has a single essence that marks it out, not a bunch of different causes from case to case.

More, it tells us what that cause is supposed to be. Almost all cases of empirically or formally seen complexity are supposed to arise because the systems involved are as computationally sophisticated as anything in our universe, or because they all have the finite analog of universality (or would be universal in the strict mathematical sense, in suitably idealized extensions or limits). In short, PCE tells us universality is the cause of complexity.

Now, this could easily be wrong. It might be the case that most complicated systems lack sufficient computational flexibility to be capable of universal computation. I can advance a plausible alternate thesis, to distinguish PCE and how much it is claiming. NKS has shown us the category of computational irreducibility - when there is no way of compressing a computation significantly more efficient than emulating it step by step (up to fixed-factor translation, 1 step for 2 e.g., or one-off transforms of the data, etc.). We can see, logically, that universality entails irreducibility in at least some portion of the cases, or some possible evolutions of a given universal system. But the reverse is not the case. Irreducibility need not entail universality.

So, suppose it turns out that the cause of apparent complexity is that most complex-looking systems are indeed computationally irreducible - but that they are below the threshold of universality, and not capable of supporting universal computation. Then NKS would still have lots to say to us about such systems. Many of its methodological innovations stem from irreducibility - the need to simulate, e.g. But PCE as stated would be wrong. Only some limited subset of complex-looking systems would be universal. Universality would be a special case inside a larger category of irreducibles, and that special case might - empirically - be rare and unusual rather than everywhere and normal.

PCE is sticking its neck out, even compared to the claim that almost all apparently complex systems are computationally irreducible. It is a distinctly stronger claim than that, one that could much more easily be wrong.

Even a weaker "complexity is caused by irreducibility" claim might be false, empirically. We can find some formal systems that can be fully analyzed (reduced to a short formula, "compiled", short-cut) that on the surface look complicated, with the full analysis not obvious but in the end possible. I call those "cracks", as in cracking codes or safes. A given apparently complicated system might not be irreducible. It might be "crackable", instead.

Thus a conjecture that most empirically seen cases of complexity will prove computationally irreducible, is a falsifiable claim. It is possible, instead, that most empirical cases of complexity have "cracks" that we just haven't found yet. I don't think so. I think "cracks" are special cases, arising on the borderline between simple behaviors and irreducible ones. But it is logically possible most cases of complexity have short formula "cracks", i.e. are reducible, and it can be shown for this case or that, by finding the crack and presenting it.

That covers a number of ways PCE could wind up being wrong. Next I'll show that PCE and RU are separable. First, PCE might be true and RU false. How? Clearly from the first we know universal computations happen in nature and so the laws of the universe must support them. But nothing in that alone says, they must be compressible to a single simple rule. The laws of nature might instead be a relatively complicated rule with many subclauses. That would not invalidate PCE. We know of complicated formal rule systems with many subclauses that are capable of universal computation - higher level computer languages, for instance. Suppose the rules of the universe are like a million lines of complicated code, capable of supporting universal computation. Then PCE might hold and RU fail. RU is adding something additional, in this case.

Second, RU might hold and PCE might fail. How? If RU holds, there is a simple rule that gives rise to all the laws of physics. Fine. We can readily assume the RU in question is fully capable of supporting universal computation. Universal computation occurs in the universe. No problem. But most instances of complexity around us might be only irreducible, and not universal. Nothing says every subsystem must include all the computational capacity of the full RU.

A system capable of universal computation can emulate any finite algorithm, simple ones as well as complex ones. Maybe almost all the apparently complicated systems we see are simple but well-hidden "cracks", in terms of the RU code. Just as a universal computer can do something simple like add 2 and 2, it can do something reducible but not easily analyzed from the results alone, like evolve rule 90 for a number of steps of the form 2^n, n an integer.
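To illustrate what such a "crack" looks like, here is a sketch in Mathematica (the bit-integer representation and the names rule90Step and rule90Jump are ad hoc illustrations added here). Because rule 90 is additive - each new cell is the XOR of its two neighbours - the operator for 2^n steps collapses to a single XOR of two shifts, so one can jump ahead t steps in roughly Log[2, t] big-integer operations instead of t explicit steps:

    (* cells are the bits of an integer; one ordinary step of rule 90 *)
    rule90Step[s_Integer] := BitXor[BitShiftLeft[s, 1], BitShiftRight[s, 1]]

    (* the "crack": additivity makes the 2^n-step map a single XOR of two *)
    (* shifts by 2^n, so decomposing t in binary reaches step t directly *)
    rule90Jump[s_Integer, t_Integer] :=
      Fold[
        Function[{state, n},
          If[BitGet[t, n] == 1,
            BitXor[BitShiftLeft[state, 2^n], BitShiftRight[state, 2^n]],
            state]],
        s, Range[0, BitLength[t] - 1]]

    seed = BitShiftLeft[1, 256];  (* one black cell, offset from the edge *)
    rule90Jump[seed, 256] === Nest[rule90Step, seed, 256]  (* True *)
    DigitCount[rule90Jump[seed, 256], 2, 1]  (* 2: just two black cells at step 2^8 *)

No comparable shortcut is known for rule 30, and PCE in effect bets that for almost all complex-looking systems none exists.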

Here is another way RU might fail while PCE holds. It might be the case that QM (or QFT) is not an abstraction or idealization of some underlying generating system, but simply true at some base level. The most primitive rules of the universe might not even be single-valued aka deterministic, let alone a simple deterministic rule. But it might still be the case that almost all the aggregated systems we see that exhibit complexity, are capable of supporting universal computation. NKS would be wrong about fundamental physics but right about the basic cause of complexity.

So either RU or PCE might hold without the other holding. Ergo, RU holding or not cannot be the crucial test of PCE. Instead, PCE stands or falls based on whether complex systems on the one hand have simple, reducible explanations, or irreducible ones not past the threshold of universality, or on the other hand are systems capable of supporting universality (with suitable idealizations about limits etc.), or are as close to it as any system implementable in our universe (to cover finite-nature cases, e.g.).

Excellent question by the way. It is a topic I regularly cover in philosophy talks at the NKS summer program. I hope this is useful.

Posted 10-01-2005 11:07 PM
Jason Cawley
Wolfram Science Group
Phoenix, AZ USA

Registered: Aug 2003
Posts: 712

Here are some examples of formal propositions that one might try to prove or disprove, that would constitute evidence for some of the alternatives above.

(1) Find and provide "cracks" for "famous" class 3 CAs (e.g. rule 30). This would be evidence "crackables" are more extensive and irreducibles less extensive than NKS claims.

(2) Formally prove that class 3s or a large portion of them are incapable of supporting universal computation. No "cracks" required. This would be evidence complex systems are commonly irreducible without being fully universal.

(3) Prove some broad class e.g. 3s are formally irreducible, for some or for most inputs. This would provide evidence cracks are rare or exceptional.

(4) Prove a class 3 system is universal. The bigger the set the proof holds for, the better, obviously. This would provide evidence in favor of PCE, as against (2).

Of course, one can also look at empirical instances of complexity and try to test PCE vs. the above for given cases of it. E.g.

(5) show fracture is "programmably complex", or that it is reducible. Or fluid flows.

(6) provide "cracks" of biological complexity that show the apparent formal computational powers of e.g. DNA or regulatory networks are less than they appear, and below a universality threshold (seems unlikely, but imaginable). Or on the other hand, formally prove they are capable of universal computation.

(7) prove certain physical aggregates (averages, temperatures, energies, etc) always follow laws simpler than those needed to support universality. Or mix with a formal class 3 result to show e.g. systems with uncontrolled internal information flow - "well mixed" - may be irreducible but are not universal - and are empirically common. Or, conversely, "program" a cup of coffee (get a well mixed system to perform computations).

(8) establish QC/QIT results that make a discrete deterministic generator for fundamental physics implausible, and that show the computational capabilities of realizable physical systems exceed those of classical universality i.e. show the real universe is fundamentally more computationally sophisticated than PCE claims. Or, show experimentally and formally, combined, that the space of computations they reach is the same as TMs (if perhaps faster).

As with most real hypotheses that stick their neck out enough to claim something about a good science problem in the real world, we would learn plenty either way, from establishing or refuting PCE.

Posted 10-01-2005 11:52 PM
Randolph Revoredo


Registered: Oct 2005
Posts: 4

Irreducibility and simplicity

Yes. It's an interesting reflection. Quite detailed. I mostly agree.

Moreover, when I read your quite reasonable thoughts about the falsification of PCE, making distinctions between logical and empirical statements, I started to think about the irreducibility concept you mention.

Let me focus a little on that issue.

I agree with those who say that, out there in the empirical world, the most fundamental forces governing everything should be simple rules, and maybe (that's one reason why NKS looks so interesting to me) one single, short rule, "two or three lines of Mathematica code" as Stephen once said. But that is only a personal attitude, an individual choice - my choice - that somehow guides my research.

But I am clear that my choice may be wrong or, as I will explain shortly, outdated (i.e. obsolete), because the formal language of science also evolves, compressing more and more complex operations into simple expressions.

The point is that I started to realize that "simple" isn't a static notion. It depends on the kind of language, and on how complex it is to denote a behaviour or a command. As I said before, the languages we use in science (mostly rooted in mathematics) advance and create new symbols for new sets of commands or operations.

In this context, I get confused when I ask myself: "what is a simple, short, crystal-clear statement?"

If I write down Einstein's energy equation as E = mc^2, it is simple and beautiful indeed; but if I were to write it in the mathematical language of Newtonian physics, it would surely be a much more complex expression (and a bizarre one at that).

In terms of NKS, I understand that Mathematica is the language used to write the science concerned with PCE and the RU, and that's good; but what if the language proves itself insufficient to denote the RU, for example?

That is one question.

Another is more philosophical and personal (not targeting NKS): is a line of code simple when it is written in a language that compresses twelve or more levels of concepts or operations into just one symbol?

Does simplicity depend, then, on the language one uses to represent it?

Is anything truly simple when the language used is complex? What is, in addition, a simple language?

As you see, with this second question I am calling into question something I appreciate and esteem a lot: simplicity.

And, if I am right, that is itself an irreducibility problem.

Last edited by Randolph Revoredo on 10-06-2005 at 06:25 PM

Posted 10-05-2005 06:41 PM
Randolph Revoredo


Registered: Oct 2005
Posts: 4

Oh, the language thing!

It is clear to me that a new revolution needs a new language.

The question is: how can I know that the language I am using is adequate, or well fitted, to pursue my aim?

How does the NKS team solve this, apropos of the goals of NKS?


Randolph.
ITB

Posted 11-05-2006 07:35 PM