A New Kind of Science: The NKS Forum > NKS Way of Thinking > Do Infinite Worlds Believers Buy Lottery Tickets?

Jason Cawley
Wolfram Science Group
Phoenix, AZ USA

Registered: Aug 2003
Posts: 712

Do Infinite Worlds Believers Buy Lottery Tickets?

Some of the philosophic problems with infinite nature positions


Scientific American recently sent a special mailer about many-worlds QM and infinite universe theories. While splashy, well laid out, and in places reasonably interesting as speculation, it struck me that it would have been more appropriate coming from a magazine called Metaphysical American. Which is certain to exist in the alternate world over that-away, if you just go far enough. If you let “certain to exist” mean something other than what the words obviously say in ordinary English usage, anyway, as those theories do.

At one point in the reasoning, large-scale uniformity of (suitably coarse) structure of matter (or in this case, the lack thereof) is presented as evidence that the universe is infinite in all directions in ordinary space-time (hardly a novel position, incidentally; it dates at least to Epicurus). Sometimes in science we legitimately employ heroic induction to extrapolate apparent laws to regions unknown. But in this context, the move struck me as more of a heroic non sequitur.

The universe might be finite but growing forever, twice its visible size and mostly uniform in all directions at sufficient scales. It might have an intricate nested structure with characteristic scales and detailed patterns that repeat infinitely in “tiles” across infinite reaches. (Whether that is consistent with observations depends on the scales posited.) It might even be clumped into balls that look like island universes in empty space (as Kant thought, a forerunner of what we now know about galaxies), but then have these balls within cells that tile that space on a spatial scale millions of times larger again. As bare abstract possibilities, there is no connection between the presence of spatial structure on observable scales and infinite or finite extension. They are orthogonal questions; all four possibility boxes are occupied. Patterned locally or not, either is compatible with finite size or infinite size.

What is actually happening is a subtle instance of the straw man argument. A weak position for a particular finite universe theory is constructed (strictly, a finite positioning of matter within a universe that might or might not also be finite in space), and (weak, merely suggestive) evidence against it is then extrapolated into evidence against any finite universe theory. If we saw observable matter visibly peter out into rarefied emptiness at large distance scales, that would be observational support for an island “universe” – without ruling out the abstract possibility of many separate islands far beyond the observable scale.

Notice that being consistent with hypothetical possibilities beyond the observable scale is not a requirement for any theory. We would say the data are consistent with an island universe. Can we say, because instead we see uniformity, that the data are not consistent with such a universe? No. We can only rule out a characteristic scale below some threshold.

An historical parallel might be of some interest. Aristarchos of Samos hit upon the correct hypothesis that the earth orbits the sun while also rotating, accounting thereby for the apparent motion of the heavens and the seasons. Aristotelians countered that if he were correct, there ought to be parallax in the apparent positions of the fixed stars over the course of a year. In fact there is such parallax, but the stars are far enough away, compared to the baseline created by the earth’s orbit, that it is very small – so small it was only successfully measured in the 19th century.

Aristarchos noticed the possibility and countered the argument against him, but with a poor formulation of his case. He said there would be no parallax if the fixed stars were infinitely far away. He didn’t say “very far”; he said “infinitely far”. Geometrically true, but physically nonsense. The Aristotelians said he was positing an “actual infinity” and dismissed the correct cosmology on the basis of that supposed no-no.

The Aristotelians had rafts of such dos and don’ts, which managed to constrain them to physical positions that were completely wrong. But they had arguments for each one, passed on to them by the prestigious codifier of logic. They easily convinced themselves anybody who differed on any one was just an incompetent reasoner, insufficiently up to speed on the “math” of the day. Scads of much more modern sounding positions were meanwhile found and formulated by looser speculators, sneered at by the cognoscenti – atoms, inertia, objective chance, the solar system, equivalence of earthly and celestial matter, an origin of the visible universe in time etc.

In fact the argument did deserve to be dismissed, but not for that reason. Aristarchos had reacted to a piece of observational data that did not fit his theory with an extraneous additional hypothesis that, taken literally, rendered his theory untestable. Instead of boldly predicting there would be parallax or his theory was wrong, he immunized his theory against empirical falsification on a critical point, precisely where his opponents’ case was strongest.

The stronger argument and the truth do not necessarily coincide. Men do not always formulate their arguments in the best way possible, and even when they do, the critical evidence may simply be missing. Making an argument harder to refute in argument is not the same as improving it as a theory. It is harder to know the truth than it is to argue rationally. Rational argument does not of itself produce agreement with others possessing the same evidence, let alone agreement with the truth (a much higher standard – we know precious little). Rational men will disagree on hard questions.

One moral is that speculation cannot be outlawed from rational investigation. But another is that little is decided by shooting down a weak form of an argument contrary to one’s own position. It is the strong form that must be considered. Ideally, we want the strong forms of all the arguments before us as a spectrum of plausible positions.

There is another problem with the inference from uniform observed matter to a supposedly infinite universe, however, that leads to my labeling it a heroic non sequitur. Suppose the infinite universe theory is correct. Then there must be any number of regions in which the distribution of matter differs from the uniformity seen on visible scales, unless some additional positive law supposedly forces uniformity on those scales. In the latter case, observed large-scale uniformity is evidence for that law, not for what sort of universe it occurs within. Leaving that nuance aside, any observed distribution of matter is consistent with an infinite universe in which all physically possible distributions of matter over such scales occur somewhere. X is not evidence for Y if the truth of Y does not imply the existence of X.

The reason the writer obviously thinks that it is such evidence is that he is invoking an unstated additional minor premise. He thinks his infinite universe would be uniform on large scales “mostly”, with the regions differing from uniformity rare, perhaps measure zero exceptions in a sea of uniformity. (He also implicitly assumes the bit of the universe we can observe is enough to qualify as “large”. In a strictly infinite background, this is arbitrary. It might take something a trillion times larger to qualify as more than minuscule, for whatever the relevant physical laws might be.) He is appealing to the principle that what we can observe is not to be taken as special, but as indicative of whatever else we cannot see.

This is ordinarily a sound inductive principle, though not logically necessary (meaning, it can easily be wrong but it is OK as a first guess). But it has peculiar weaknesses in the hands of those stumping for an infinity of possibilities we cannot, even in principle, see. His overall position, after all, is not only that anything can happen but that everything does. To deduce from this, as a supposedly necessary consequence, that (almost) everything we don’t see must look like what we can see, is simply inconsistent reasoning. “Anything we don’t see must still happen” is a necessary consequence of the position he is arguing for.

It is philosophically possible that everything happens, we can consider it speculatively. What we can’t do is argue that because everything happens, things must happen exactly this way rather than that way. The principle of non-specialness has lost its moorings. The basis of that principle is a pragmatic belief that what we have already seen results from some internal necessity, which we consciously choose to expect to continue, until events teach us otherwise.

I don’t know whether those scientists enamored of such theories recognize the transfer of explanatory responsibilities they are engaged in. When physics decides to tell us that everything happens somewhere, it does not reduce what remains to be explained. It simply exports the explanatory problem. Rather than ask what the universe I live in will do in the next five minutes, one is instead left with the equal puzzle, what universe my consciousness will be experiencing five minutes from now. The answer “all of them” is falsifiable, and false.

A theory may deterministically deduce the branching trajectories of a billion possible universes per nanosecond, but if all it can tell me about which branch I will experience is “who knows?” then it simply is not explaining experience. It is instead explaining the temporal development of a hypothetical construct, with tiny projections onto experience, which makes contact with it on a set of measure zero. It is not explaining overworlds of possibility; it is simply failing to explain experience, leaving that to somebody else.

Note that denying that it is possible to explain any further is equivalent to leaving the explanation to somebody else; it has no further content, as a claim. The epistemic bar anyone else has to clear is unchanged. They need not even take notice of the previous theory. They can either explain experience or they can’t.

Here are two plausible alternate cosmologies, and my claim is that the infinite world types cannot point to a place where they make claims that are both falsifiable and distinguish their theory from either of these. They may make falsifiable claims that coincide with either. They may make claims that differ from either of these, without being falsifiable. But the challenge is to state claims that are consistent with infinite worlds, inconsistent with either of these, and observationally falsifiable.

Alternative one I call objective indeterminism. QM accurately describes the world, and there is nothing more basic underneath it. Indeterminacy in the QM formalism reflects actual indeterminacy in the world, metaphysical chance. Some one thing happens, and which it is, is completely uncaused. The material universe is finite in space and to date in time, though either may be open ended going forward – consider that an empirical question, as yet unknown. It might even be objectively undetermined up to now (consistent with the basic outline of this view – i.e. it might depend on an uncaused future event), though I doubt that.

Alternative two I call non-local emergent determinism. There is some underlying non-local deterministic generator beneath QM (perhaps at quantum gravity scales, perhaps smaller still), and QM is an emergent coarse-grained property of that underlying system and its currently observable statistical regularities. The underlying system is objectively non-local in the emergent space-time that it generates - all the action at a distance you might want to get around Bell’s theorems. Things that happen here and now really depend on things that happened there and then, and you can’t isolate them in a small ball around either. The material universe is strictly finite, as are its underlying possible states. It is also irreducibly complex, limiting prior predictability of its details. The trajectories the universe would follow from slightly different underlying states might be as distinct as you please, which might result in limits on observational knowns, but one trajectory is the one the universe is actually on.

Each of these positions is clearly distinct from infinite worlds. They are also clearly distinct from each other. Each predicts we will find different things to be true within physics, beyond what we already know. Neither tells us that anything we could possibly find will be consistent with it, altering only in tiny ways where we are to consider ourselves to be in some cosmic probability distribution that realizes everything, somewhere. To my mind, that means both are much more robust philosophically speaking. They are trying to say more about the world.

Scientists must be free to speculate. Speculation is valuable because guessing is essential to finding the truth, which can lie beyond men’s present imagination and beyond what their present evidence may make plausible. But if they wave their hands and say anything can happen and probably will, it is not the most impressive use of that freedom.

Philosophically, it is trying to avoid being wrong, when the point is instead to risk being wrong. Tell us which world you think we will live in, and if you are wrong about it, iterate on the guess. Don’t tell us instead that there are a lot of guesses and some of them are bound to be right, someplace. It is not important to avoid error. Error is good for you, and it is man’s natural state whether he likes it or not. It is important for your theories and your convictions to make contact with the world.

The American pragmatist Charles S. Peirce defined belief as the willingness to stake much upon a proposition. You can tell whether a man is giving you his honest opinion or “just jawing”, by whether he is willing to back his statements with his actions. Note that this is only a test of psychological honesty, not of truth.

Do those attracted to infinite worlds theories buy lottery tickets every chance they get, to enjoy their riches in those universes in which they win? Right, in some worlds they do and in others they do not. But that doesn’t answer the question, it just reformulates it. Which sort of world are we in, one where they’ll believe in Peirce’s sense in anything, or one in which they are covering lacunae within their theories by just jawing?

Posted 12-03-2004 07:48 PM

Daniel Geisler

Santa Rosa, CA

Registered: Jan 2004
Posts: 16

One way to approach the problem of infinite universes is to ask the more general question of where infinity appears in physics. In this case physicists usually speak of singularities instead of using the term infinity. I believe that the only place singularities exist in physics is in General Relativity; the Big Bang, black holes, and white holes are singularities that arise in solving Einstein's equations, but we have no experimental reason to believe that white holes actually exist. Even the existence of the first two types of singularities is provisional; they appear in the mathematical solutions to idealized models of physics. No one really believes that the physics of these systems is described by General Relativity alone, because Quantum Mechanics also needs to be accounted for.

Stephen Hawking has argued that even considerations from quantum mechanics don’t negate the necessity for the Big Bang to be a true singularity. Hawking uses this as the basis for his premise that time began with the Big Bang. Other physicists have argued to the contrary, saying that without a theory of quantum gravity that reconciles General Relativity and Quantum Mechanics it is premature to argue that the Big Bang was a singularity. Physicists working with quantum gravity are also guarded about saying that a singularity resides at the center of black holes. A major triumph of superstring theory is its ability to tame the singularities at the center of black holes.

I don’t believe that infinity directly appears in any physical systems, although it is important in mathematics and even mathematical physics. Infinity only appears indirectly in physics; for example, it’s acceptable in physics to use infinity in the form of poles to evaluate contour integrals in order to find a function that describes a physical system, but the poles are always “off to the side” somewhere. So a one-dimensional physical system could be described by a complex function with a pole at i; the physical system is constrained to the real numbers while the mathematical function describing it could go to infinity at i.
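A standard textbook instance of this (my choice of function, not one given in the post): 1/(1+x^2) is finite everywhere on the real line, but its complex extension 1/(1+z^2) has poles at z = ±i, and closing a contour in the upper half-plane evaluates the real integral from the residue at the pole sitting “off to the side”:

\[ \int_{-\infty}^{\infty} \frac{dx}{1+x^{2}} \;=\; 2\pi i \,\operatorname{Res}_{z=i}\frac{1}{1+z^{2}} \;=\; 2\pi i \cdot \frac{1}{2i} \;=\; \pi \]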

I think this whole issue lies at the heart of the discrete physics movement. Its proponents, like Wolfram and Smolin, maintain that continuity is based on infinity, but that since infinity doesn’t appear in physics, one needs to challenge the idea of continuity appearing in physics. My research in mathematics leads me to believe that ultimately the concepts of discreteness and continuity can’t be separated, just as the concepts of Yin and Yang can’t be separated. Because of this I suspect that fundamental physics can be interpreted in terms of discrete dynamics, but that it can also be interpreted in terms of continuous dynamics.

The only way I can think of to justify the existence of infinity in physics is to have a deeper understanding of its utility in mathematical physics. In this case I think it would be better to think in terms of transfinite numbers instead of infinity. Reverse mathematics notes that mathematics is stratified into systems of theorems of equivalent power. It is my understanding that higher orders of infinity are products of more arcane branches of mathematics based on more powerful axioms. So my question is: which transfinite numbers are within the realm of mathematics needed to formulate theories like LQG and superstring theory? The greater the number of transfinite numbers, the more I would be concerned that there was actually some type of physical infinity unaccounted for. I know of no reason why infinity would even appear in physical systems based on CAs.

So until I find a reason to drag infinity into the discipline of physics itself, I see no reason to start hypothesizing about infinite unobservable universes.

Posted 12-05-2004 07:25 AM

Tony Smith
Meme Media
Melbourne, Australia

Registered: Oct 2003
Posts: 168

Alternative cosmologies

Jason, I increasingly suspect there might be something fundamentally wrong with the prevailing assumption that micro determinism strictly implies macro determinism across the vast range of scales concerned.

This is what I was getting at in my earlier Soup problem post. If what emerges at some level behaves like a liquid then we are unlikely to be able to see it through anything other than statistical glasses. And space itself seems to have a lot in common with an ideal liquid, yet I don't see that as reason to give up looking for simple microscale mechanisms from which such a liquid might emerge.

One downside is that we may never be able to fully simulate the emergence of a liquid state from a deterministic microstructure on any computer which could actually be built in this universe, though we may find useful clues in much smaller simulations.

I've already had enough to say about the absurdities of conventional "understanding" of infinite possibilities somewhere else, but it really only boils down to the, to me, obvious fact that even in an infinite universe we would still only find a measure zero fraction of possibilities actualised, with those that are most likely each being actualised relatively often. The real problem is that the size of regions which might be identical grows as the log of the size of the universe and nobody has yet come up with a convincing story about aleph(-1).

__________________
Tony Smith
Complex Systems Analyst
TransForum developer
Local organiser

Posted 12-05-2004 11:07 AM

Daniel Geisler

Santa Rosa, CA

Registered: Jan 2004
Posts: 16

aleph(-1)

Tony Smith’s comment about aleph(-1) is really important. I spend a lot of time thinking about how to extend tetration and the Ackermann function to the complex numbers. Like the folks who posted about Hyper-operations, I suspect that transfinite numbers will ultimately play an important role in extending the Ackermann functions. I have problems reconciling the fact that set theory uses exponentiation in the power sets that generate the initial transfinite numbers, while dynamics uses the Riemann sphere to model the maps of the exponential functions. Here are two applications of iterated exponentiation; the first generates a series of transfinite numbers while the second only involves the first transfinite number. Things get weird when you consider the equation 2→x→2 = aleph(0). If k is a natural number including zero, then 2→(x+k)→2 = aleph(k). Since I am planning on spending a significant portion of the next month writing a paper on defining tetration for the complex numbers, a→b→2, I am grappling with the fact that this implies it doesn’t make sense to restrict k to the whole numbers in 2→x→2 = aleph(0). It seems to me that if the Continuum Hypothesis were true, then I could use that fact to prove that tetration can’t be extended to the real numbers, much less the complex numbers. In checking MathWorld’s entry on the Continuum Hypothesis, I have just discovered that there is now an argument that the Continuum Hypothesis is false. So it may be that the expression aleph(-1) does make sense.
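For readers unfamiliar with the chained-arrow notation: a→b→2 is tetration, a↑↑b, an exponential tower of b copies of a. A minimal Python sketch of the finite case (my own illustration only; it makes no claim about the transfinite extension discussed above):

def tetrate(a, b):
    # a -> b -> 2 in Conway chained-arrow notation: a^a^...^a, with b copies of a
    result = 1
    for _ in range(b):
        result = a ** result
    return result

print([tetrate(2, b) for b in range(1, 5)])   # [2, 4, 16, 65536]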

Posted 12-05-2004 06:23 PM

Jon Awbrey


Registered: Feb 2004
Posts: 557

How Many Infinities Does It Take To Make A World?

Jason,

I'm a little confused here about the extent to which you mean or don't mean to confound several distinct issues: (1) simple infinity versus non-denumerable continuity, (2) whether infinite models of any cardinality and/or dimension are called for in modeling the so-called actual universe, and (3) whether models are called for that invoke alternate universes in some real, not merely conceptually virtual, sense of it all. Could you clarify your intentions a bit on this?

Thanks In Prospect,

Jon Awbrey

Posted 12-06-2004 02:16 PM

Jason Cawley
Wolfram Science Group
Phoenix, AZ USA

Registered: Aug 2003
Posts: 712

Some background first - the occasion for my comment was a circular from Scientific American entitled "Parallel Universes". The author is Max Tegmark -

www.hep.upenn.edu/~max/

Note that he himself considers this subject one of his "out there", speculative interests, and has done a lot of other solid cosmology work. He writes on his website, "Every time I've written ten mainstream papers, I allow myself to indulge in writing one wacky one, like my Scientific American article about parallel universes. If you don't mind really crazy ideas, check out my bananas theory of everything..."

Incidentally, useful background for some of these questions with a philosophy level treatment can be found in two books from the early 80s by Paul Davies, "Other Worlds" and "The Edge of Infinity". Davies relies on Wheeler (the originator of many worlds QM) for much of his discussion. They are fun books and quite clear. That is background for those wishing to inform themselves. Now back to Tegmark's article, and people's questions here.

Tegmark distinguishes four levels of infinity applied to universes and stumps for all of them - infinite extent within single universes, ongoing inflation theory leading to multiple ordinary universes separated from each other by recession, many-worlds QM applied to all of the above, and lastly an ensemble of possible QMs motivated by string theory, of which any given many-worlds QM stack of multiverses is merely a single instance.

The basic motivation of so many stacked hyperinfinities of possible universes is to boil everything down to one ensemble. Essentially, everything unspecified is regarded as a point in a probability distribution - which string theory describes "our" physics, aka which string theory is true; which branch of QM many-worlds development one is in, aka every particular QM determination; which separated bit of multiverse, aka what values the cosmological parameters have; and lastly where within the resulting supposedly still infinite universe one happens to be, aka the actual value of every possible observable within our light cone and the initial conditions that gave rise to it. Roll a physics, roll a history, roll a cosmology, roll a location.

As the author puts it "complexity increases when we restrict our attention to one particular element in an ensemble, thereby losing symmetry and simplicity that were inherent in the totality of all elements taken together." He doesn't want to ask the color of the center cell of rule 30 at step 218 from initial condition IntegerDigits[787,2,10], because gosh that's so asymmetric and involved. Instead he wants to only have to say what the average grey level of all cells on all steps from all initial conditions from all rules is, because gee that is a nice symmetric 0.5. I am simplifying for the purpose of clarity; read the article or visit his website for a more charitable explanation of his position.
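To make the contrast concrete, here is a minimal Python sketch (my own rendering; the expression quoted above is Mathematica's IntegerDigits) that answers the specific question rather than the ensemble-average one: run rule 30 from the 10-cell initial condition given by the binary digits of 787, on a background of zeros, and read off the center cell at step 218. However "asymmetric and involved" the question may be, the answer is a definite 0 or 1.

RULE30_ONES = {(1,0,0), (0,1,1), (0,1,0), (0,0,1)}   # neighborhoods mapping to 1

def rule30_step(row):
    # one step of rule 30 on a zero background; the row grows by one cell per side
    row = [0, 0] + row + [0, 0]
    return [1 if (row[i-1], row[i], row[i+1]) in RULE30_ONES else 0
            for i in range(1, len(row) - 1)]

row = [int(b) for b in format(787, "010b")]   # IntegerDigits[787, 2, 10]
for _ in range(218):
    row = rule30_step(row)
print(row[len(row) // 2])   # the definite color of the center cell at step 218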

This is not directly a question about universe cardinalities and orders of infinity, but it is related. The universe as rigorously finite (discrete underlying generator, bounded in space), or as of countable cardinality (e.g. the previous but infinite in the time dimension, going forward), are positions clearly distinct from his.

One version of Church's thesis holds that the space of computable processes is equivalent to the space of processes physically realizable in our universe. A slightly weaker form would speak of operationally distinguishable processes rather than physically realizable ones. Finite nature is stronger still.

Real processes sensitive to continuum infinite cardinality rather than countable infinite cardinality are posited by some quantum computing (QC) advocates - though there is some room for confusion there. Multiple paths might be demonstrated in QC, which shows a certain "non-classicality", but not continuum cardinality. That is a higher hurdle. Multiway, countably infinite multiway, and continuum infinite multiway are distinct. Those who buy the full many-worlds QM picture typically expect the last to be possible.

The point is that in QM, we integrate over possible ways an event can happen, and that integral is mathematically over a continuum infinity of paths. But they are also continuous mappings. And we have powerful theorems in analysis that tell us continuous mappings can be covered arbitrarily closely (in suitably defined senses of distance) by countable bases (families of orthogonal functions, with dense but countable coefficients, etc). Continuity is often seen as giving rise to higher infinities, and certainly we need real numbers to deal with continuity. But continuity tames real numbers back again, in the "native" context of analysis. Real numbers are necessary in analysis, highly useful. To me, that is where they belong.

Real numbers take on pathological properties when divorced from continuity, and especially when they appear in information contexts. Imagine for the sake of argument that a countable cover description exists of some state of the universe for a span of time with positive measure (not one point, something with duration). Then one can show the information content of any finite set of whole universe trajectories is less than that of a single real number. In other words, you could encode multiple histories of entire universes in the digits of some single real number, call it "gamma", along with encyclopedic running commentary on everything that happens in any of them.
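A toy version of that encoding, just to show the mechanism (finite digit strings standing in for the infinite expansions, and my own illustration rather than anything in the article): interleave the digit streams of several "histories" into one expansion and recover any one of them by striding.

def interleave(streams):
    # round-robin the digits of several equal-length streams into one expansion
    return [d for digits in zip(*streams) for d in digits]

def extract(mixed, k, n_streams):
    # recover stream k from the interleaved expansion
    return mixed[k::n_streams]

histories = [[3,1,4,1,5,9], [2,7,1,8,2,8], [1,6,1,8,0,3]]
gamma_digits = interleave(histories)               # digits of a single "gamma"
print(extract(gamma_digits, 0, len(histories)))    # recovers [3, 1, 4, 1, 5, 9]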

Now, histories of entire universes with commentary are simply not what I want to mean when I point to a single real number. Purely philosophic point, no math involved. My intentions for the definition and the definition fail to coincide. That is what I mean when I say the standard definition of a real number develops pathological properties in information measure contexts. As Gregory Chaitin has pointed out, real numbers in this sense are almost all unnamable, let alone unknowable. The Church thesis crowd want instead to stick to algorithmic numbers, meaning those that can be arrived at by some specified procedure. A smaller set than the reals.

I note in passing that the various parallel universe bits the article discusses would still be conceivable if all the cardinals involved were countable. So the cardinality question is in principle separable. In terms of motivation, continuous math in QM probability distributions may prompt characteristic views on both questions, but neither is necessary to the other. Some combinations might be excluded unless some systems involved are strictly finite (e.g. a countable number of many-world paths per countable number of multiverses implies real number infinity overall) - but the existence of a given parallel level in Tegmark's scheme and the cardinality of the infinities involved are separable questions, about which theories might differ.

But Tegmark wants infinity even at what he calls level one. That does not involve many-worlds QM branching or an ongoing chaotic inflation multiverse with its separate bubbles receding from each other. It is just the old infinite universe picture of Epicurus. The addition is just to bound distinct pieces of it in light cones, as "observable". As for how he gets its supposedly necessary internal repetition, there he appeals to a finite cover - only so and so many particles, within a bounded region. From this scale below to this scale above, only room for this many arrangements, strictly finite. Therefore, go far enough and that arrangement must repeat. This too is an old idea, though traditionally projected onto the time axis rather than the space axis, as eternal recurrence ("wherever I have found force, there I have found number - for she has more force" - Nietzsche).
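The finite-cover step, granted the assumed infinite extent, is just the pigeonhole principle. A trivial sketch (a toy count of arrangements standing in for the enormous but finite real one): if each bounded region can only be in one of N arrangements, then any N+1 regions must contain a repeat.

import random

N_ARRANGEMENTS = 16   # toy stand-in for the finite number of arrangements per region
regions = [random.randrange(N_ARRANGEMENTS) for _ in range(N_ARRANGEMENTS + 1)]
# more regions than possible arrangements forces at least one repeat (pigeonhole)
assert len(set(regions)) < len(regions)
print(len(regions), "regions,", N_ARRANGEMENTS, "possible arrangements: a repeat is guaranteed")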

There is in fact no necessity in the reasoning, though. It requires the independent assumption of infinite physical extension, which is motivated mostly by the author's innate preference. The supposed evidence is just the relative uniformity of observable matter at large distance scales, with a hand waving "Assuming that this pattern continues...", which is assuming rather a lot, really. Basically, the conclusion. That sort of argument would not pass muster in an undergraduate philosophy paper. He is of course still welcome to the view, which is speculatively interesting. But the reasoning does not connect that view to the supposed evidence (as the quoted phrase above shows, the decisive step is an assumption, not any necessary inference), which is not evidence of anything of the kind. That was the main subject of my original post in the thread.

There is a more basic philosophy point I was making about physical theories. They are unconstrained by philosophy - we have learned that it is a bad idea to lay down prescriptions pretending we already know how nature "has to be". But philosophy can illuminate the range of possibilities compatible with what is known, and the "play" and trade offs remaining between them. As well as clarify core concepts, and notice where they may lead to confusion or to dead ends.

To my mind the right way to state the ensemble point is that present physical theories, motivated largely by attempts to preserve as many symmetries as possible (because those are considered mathematically "elegant"), underspecify the phenomenal world we experience. All sorts of possibilities are fully compatible with the preservation of those symmetries. But preserving symmetries, and reducing to simplicities, are not ends in themselves.

You can create a beautifully symmetric theory of the universe that reduces everything to a marvelous simplicity, with the venerable old metaphysical insight, "all is one". Unfortunately, this theory underspecifies the world. To think is to make distinctions, to break symmetries, to assert something and deny its contrary, and to project the results onto a real external world as a claim about it. Even for Platonic formalists (among whom I count myself).

(Notice, the possibility of encoding entire trajectories of multiverses into individual real numbers, underlies the idea of a single probability distribution from which any given universe is a drawn instance. The position in the distribution is such a number, or vector of numbers. The rest of the theory is a procedure for decoding that number to get the universe it "names").

Last, for Tony Smith, I am also interested in the underlying generator vs. observable coarse grained issue. We may in a given instance only see some lossy mapping of the underlying system. A simple CA version of this is to look at average density (total cell count across a line) through time for CAs and try to imagine deducing the underlying rule from the time series. Since the mapping is lossy, the same coarse grained number in the sequence does not always lead to the same consequent state. The emergent, lossy mapped average looks non-deterministic, even though the underlying rule is clearly deterministic. One can sometimes restore determinism at the coarse grained level by imputing hidden states, or "memory". But the general problem of finding the particular underlying generator is hard.
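A minimal sketch of that experiment (my own choices of rule, lattice size, and run length): evolve rule 30 on a small cyclic lattice, record only the total cell count per step, and check whether a given coarse-grained value always leads to the same successor. Typically it does not, so the lossy observable looks non-deterministic even though the underlying rule is strictly deterministic.

import random
from collections import defaultdict

RULE30_ONES = {(1,0,0), (0,1,1), (0,1,0), (0,0,1)}

def rule30_step_cyclic(row):
    # one step of rule 30 with cyclic boundary conditions
    n = len(row)
    return [1 if (row[(i-1) % n], row[i], row[(i+1) % n]) in RULE30_ONES else 0
            for i in range(n)]

random.seed(0)
row = [random.randint(0, 1) for _ in range(40)]
densities = []
for _ in range(400):
    densities.append(sum(row))          # the lossy, coarse-grained observable
    row = rule30_step_cyclic(row)

successors = defaultdict(set)
for a, b in zip(densities, densities[1:]):
    successors[a].add(b)
# densities that were followed by more than one distinct value on this run
print({d: sorted(s) for d, s in successors.items() if len(s) > 1})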

The issue is not so much that micro determinism does not necessarily lead to macro determinism, but that we can't tell which micro-determinism is leading to a given apparent macro determinism or apparent indeterminism. If we go only by statistical fits, we will find a range of micros that get some of the coarse macro statistics right, and botch details.

There may well be a single exact micro rule-and-initial-condition combination that is exactly correct. But even knowing that (assuming we did) would not tell us how to find it. Some trial and error is always going to be involved. And short of an exact hit, or a vastly simpler but determined macro system (a real reducibility of the underlying system), we will generally settle for something that gets various gross features of the emergent behavior correct.

What the two subjects have in common is the tendency to reach for the ensemble approach, when the dynamics elude easy characterization. The dynamics are deliberately underspecified, to aim at an easier target - some overall average, or the symmetry of an ensemble of possibilities. We have developed mathematics for that, for the tendency to reduce anything involved to a mapping to a number (via averaging e.g.).

Now, that just isn't the only form of simplicity, stability, or structure that systems can exhibit. They can also have the structure arising from repeated applications of a short underlying rule. We need to develop our ability to scan for that, and our intuition that it may be what is going on in cases we have traditionally handled by "underspecific theories".

I hope this is interesting.

Posted 12-06-2004 06:28 PM

Tony Smith
Meme Media
Melbourne, Australia

Registered: Oct 2003
Posts: 168

Underdetermined microscale rule?

Jason:

The issue is not so much that micro determinism does not necessarily lead to macro determinism ...
Such a counterpoint to prevailing wisdom might still be worth taking on board. Let me suggest how it might happen:

All the discrete models we are considering depend at some level on a notion of neighbourhood. But what if the very definition of a cell's neighbourhood was (very slightly) mutable due to the interactions of emergent structures operating at larger space and time scales?

What we might finish up with is a simple microscale deterministic rule that almost always applies, but which, rather than intrinsic random variation, can vary subtly depending on the effectively limitless emergence of higher order structure. The notionally simple program determining the next state of a cell can then rapidly gain an open ended set of exception conditions--in a kind of feedback between the microscale and emergent domains--maybe just to tell the cell what neighbourhood to apply its rule to at the next tick. (I'm being loose in my use of "cell" here. One of the lessons of my Tick Tock experiment is that it is very easy to get pattern persistence even when basic elements (e.g. Tick Tock's nodes and edges) only exist for a single tick.)

So if we buy the observation that at some close-to-but-not-quite elementary level, space behaves like an ideal liquid, any regularity in the basic elements' neighbourhoods might be marginally disrupted by jostling at that emergent liquid level. If we then wanted to maintain the notion of micro determinism, it could only be done at the price of adding open ended complexity to the underlying "simple programs". However if our real aim is an economical description of what is going on, it would likely be more economical to talk about the interdependence and coevolution of the simple microscale rule and the emergent liquid, rather than to pretend that an open ended rule is a meaningful guarantor of determinism.
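One toy reading of that suggestion, with arbitrary thresholds and no claim that this is the intended construction: a one-dimensional totalistic rule in which each cell's neighbourhood radius at the next tick is nudged by the coarse-grained density of a much wider window around it, so the "simple program" stays fixed while the neighbourhood it applies to is (very slightly) mutable.

def step(row, base_radius=1):
    # toy rule: fixed totalistic update, but the neighbourhood radius a cell uses
    # depends on an emergent, coarse-grained observable (density of a wide window)
    n = len(row)
    new = []
    for i in range(n):
        wide = sum(row[(i + d) % n] for d in range(-5, 6))     # emergent-scale density
        radius = base_radius + (1 if wide > 7 else 0)          # the neighbourhood mutates
        neigh = sum(row[(i + d) % n] for d in range(-radius, radius + 1))
        new.append(1 if neigh in (1, 2) else 0)                # the fixed "simple program"
    return new

row = [0] * 20 + [1] * 5 + [0] * 20
for _ in range(30):
    row = step(row)
print(row)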

If this outline holds, "the general problem of finding the particular underlying generator" becomes a lot worse than "hard", but still does not provide excuse to retreat to untestably infinite horizons.

__________________
Tony Smith
Complex Systems Analyst
TransForum developer
Local organiser

Posted 12-07-2004 02:06 AM

Daniel Geisler

Santa Rosa, CA

Registered: Jan 2004
Posts: 16

The Cosmological Principle

Tegmark’s infinite worlds rest on two principles: the Cosmological Principle, which states that the Universe is homogeneous and isotropic, and the thermodynamic constraint that finite space can only contain finite information. Quantum mechanics only affects the distribution of infinite worlds if they exist; it has no role in producing infinite worlds. It seems that having an issue with infinite worlds indirectly means having an issue with the Cosmological Principle.

Posted 12-12-2004 10:59 PM

Jason Cawley
Wolfram Science Group
Phoenix, AZ USA

Registered: Aug 2003
Posts: 712

Finite possible states within a finite volume is not disputed. But Tegmark's parallel universe conclusions do not follow from homogeneous and isotropic alone, not remotely. In fact, it would be closer to the mark to say parallelism follows from the assumption of an infinite universe, and repetition of all patterns follows from that plus the assumption of ergodicity in states.

Homogeneous and isotropic does not entail anything about infinite extent (Tegmark's level 1) at all. It is fully compatible with an entirely finite universe. If you can grant the stated premise but not accept the stated conclusion, then the reasoning is said not to follow - in Latin, non sequitur. In Tegmark's own article, he reveals this with the handwaving "assuming this pattern continues" - an assumption.

A single imagined contrary can be constructed that can serve as the straw man foil, denied both by homogeneous & isotropic on the one hand and infinite on the other - a finite inhomogeneous "island" universe, with all matter clumped in a limited region and space extending beyond that region (how far, unknown). But sharing one contrary is not enough to make two things equivalent. One can just as easily have an infinite inhomogeneous universe, or a homogeneous but finite one. They are orthogonal questions.

It is perhaps worth noticing in passing that "homogeneous" is also a term that is relative to some scale, and only approximately true. The interior of a star - or of a proton - is not identical in matter distribution to a ball of vacuum of equal size in intergalactic void. So homogeneity disappears in the small. In the large limit, it instead becomes tautological - if you average over everything there is, you get the everything average, by the definition of average. The term has meaning on a large but sub-everything scale.

The presently observable distribution of (visible) matter shows filaments and clusters. A much more homogeneous signal is seen in the microwave background, with small but measurable deviations from uniformity. We still say this is homogeneous because if you draw a ball large enough, the average measure contained within it will be approximately the same as you move the ball from location to location within the data, with modest fluctuations. Note however, in the case of the observable filamentary matter, if you shrink those balls to galactic cluster scale, homogeneity is no longer present. You are in a void or on a filament, or near one, etc. So it is something meant to apply to a suitably coarse-grained picture.
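A sketch of that scale dependence (toy numbers throughout, of my own choosing): build a clumpy one-dimensional "matter" field of filaments separated by voids, then compare windowed averages at a small and a large window size. Small windows swing between empty and nearly full; large ones all hover near the global mean.

import random
random.seed(1)

# clumpy toy field: short dense "filaments" separated by long "voids"
field = []
for _ in range(200):
    field += [1] * random.randint(5, 15) + [0] * random.randint(30, 60)

def window_averages(field, size):
    # non-overlapping windows of the given size, each reduced to its mean density
    return [sum(field[i:i + size]) / size for i in range(0, len(field) - size, size)]

for size in (20, 2000):
    avgs = window_averages(field, size)
    print(size, round(min(avgs), 2), round(max(avgs), 2))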

Finite possibilities within any finite region, combined with an independently assumed infinite extent, is already enough to ensure some patterns repeat - weak parallelism. But the assumed infinity is the critical component of this. In fact, weak parallelism does not need homogeneity at all.

A stronger form, which says not only do some patterns repeat, but any pattern actually does so, requires an additional assumption. But it isn't homogeneity (though homogeneity may make it more plausible). It is ergodicity, that possible states be well connected or well mixed. Otherwise it would be possible to account for endless repeated patterns with e.g. some limited set of attractor states, without everything that can occur on transients to them, being represented somewhere.

This need not violate homogeneity, if said attractor states include homogeneous ones. Other states would have to occur in only limited measure, falling below the coarse-graining threshold. Just as proton here and vacuum there does not violate the assumption of homogeneity at a larger scale, some states might be present only once even in an infinite universe with infinitely repeated patterns as the "usual" "cover", without violating homogeneity.

So, at level one, the actual logical machinery Tegmark needs is (1) assumed infinity (the actual determining step, and not forced or even suggested by evidence) (2) well mixed or ergodic. The first already gives weak parallelism, the second extends it to infinite instances of any pattern that is possible. Notice, the number of instances of any pattern in the second case is the same - countable infinite. The only meaning one can give to "likelihood" or "frequency" in that case, is the average distance between instances. Even then, you need to specify "average", and need the averaging to be over infinite sets. Because by mere chance, some of these infinite copies could be close to each other.

Not QM, but specifically the many worlds interpretation of QM, enters in Tegmark's level 3. He wants another form of infinite parallelism within each level 1 infinite universe (or level 2 multiverse). Each QM "determination" in classical terms is an illusion created by separation of universes. Every possible trajectory from any given state occurs, and has a universe of its own. (Strictly, not quite its own, since nearby possibilities can remain entangled - but effectively as many universes as there are QM events to distinguish them). Tegmark does not want this to be a semantic point, where "a universe in which the electron went that-a-way" is another way of saying "the electron went that-a-way". He wants to distinguish between those two and insists the first is correct.

There is nothing wrong with the man giving us his speculative view of parallel universes. But the reasoning he offers does not connect those views to the evidence he provides - beyond the level of "compatible, possible" anyway. It does not follow from his evidence, but is at each stage driven by free additional assumptions - that the universe is infinite in ordinary physical space at level 1 (and well mixed in terms of trajectories etc), that the many-worlds interpretation of QM is true at level 3, etc.

Those additional assumptions can be considered speculatively, too. But they are not remotely required of anyone who agrees that the universe is homogeneous and isotropic on greater than galactic cluster scales, or that the number of possible states within a finite region is finite. They are quite optional additions.

Posted 12-13-2004 03:27 PM

Daniel Geisler

Santa Rosa, CA

Registered: Jan 2004
Posts: 16

My point was to reduce the arguments for and against infinite worlds to their simplest form. The Cosmological Principle states that the Universe is homogeneous and isotropic; this implies that the Universe has no boundaries. In other words, there is no location in the Universe where you have stars and galaxies to one side and void on the other. The Cosmological Principle is at odds with the idea of a finite island of stars and galaxies surrounded by an infinite void.

Scientific American had another interesting article a few years ago in which the topology of the Universe was considered. The finitude of the Universe is predicated solely on its topology. The topological possibilities for a two-dimensional universe are an infinite plane, the surface of a sphere, or the surface of a torus. The topologies of the sphere and torus are compact and thus would lead to a finite two-dimensional universe, while the plane would lead to an infinite universe. The number of different topologies possible for our Universe surprised me. Cosmologists are studying the new detailed maps of the cosmic microwave background radiation (CBR) to see if a smaller image of the CBR map can be found in the large map. This would indicate that we are in a compact universe seeing radiation that has traversed the universe twice.

I have a different take on Tegmark’s article than Jason. Tegmark’s primary research focus is precision cosmology: determining the criteria cosmological models must satisfy in order to be consistent with our cosmological observations. I see his article as more of a description of the universe than a theory of the universe. A prospective cosmological theory that is fundamentally at odds with Tegmark’s proposed universe is likely to be inconsistent with cosmological observations. Tegmark repeatedly makes the point that his description of infinite universes is not a personal theory, but an “uncontroversial” tenet held by the cosmological community.

My issue with Tegmark is the nature of the final level of his universe. I suspect that a number of people with the background to appreciate NKS or Godel, Escher, Bach will find his idea that any mathematical structure can serve as a “physical universe” to be odd. Tegmark agrees that a useful criterion for determining if a universe is real is if it contains self-aware entities. I believe that self-aware entities can only exist within the structure of dynamical systems. How could self-awareness exist in the absence of recursive processes?

So, what are the options for a universe? They are a finite universe in finite space, a finite universe in infinite space, and an infinite universe in infinite space. I don’t think we have anyone who wants to make a case for an infinite universe in a finite space because of thermodynamics. This last statement is distinct from Tegmark’s observation that inflation could lead to a finite volume of space inflating to an infinite volume, from our perspective. That is analogous to Klein’s infinite hyperbolic tiling within a finite circle.

A finite universe in finite space would result from the universe having a compact topology as in the surface of a sphere. The universe would appear to be infinite to the casual observer but you could travel in an arbitrary direction and eventually come back close to where you started.

A finite island universe in infinite space is probably the belief most widely held by the average man and the least widely held by cosmologists. This idea is inconsistent with the Cosmological Principle because it requires a boundary, but that doesn’t mean it is impossible.

Infinite universes require what Jason refers to as parallelism, but finite universes don’t rule parallelism out. Even a finite universe may be large enough to contain multiple copies of some portions of itself. But the nature of the parallelism may take one of several forms.

Tegmark states that inflation with ergodic perturbations leads to some Hubble volumes having identical counterparts, but not all. Inflation with quantum perturbations leads to all Hubble volumes having identical counterparts. It seems to me that a small compact universe would not strictly have other copies of itself, but would be indistinguishable from a universe with perfect spatial periodicity. Tegmark also brings up the idea of a universe containing identical Hubble volumes, but at different points in their history. An idea I find interesting is that there could be another Hubble volume identical to our own, but running backwards in time. The entire universe would arise from a chain of statistical accidents. At each moment, almost all of these universes would finally begin to experience time flowing in the same direction that we do and then would become indistinguishable from our own universe. But at each moment a minuscule number of universes would continue on from statistical oddities.

Maybe there is an analogous concept to the Planck length at very large scales. Just as there is a scale of distance so small that there is nothing smaller worth talking about, there may be a scale so large and encompassing that it is useless to talk about even larger scales.

The main philosophical point I find interesting is that in my first post on this thread I was willing to exclude the infinite from physics. I now remember that this line of reasoning comes from efforts early last century to remove infinity from mathematics by arguing that infinity doesn’t exist in physics. Somehow it doesn’t seem that inappropriate to me for a cosmologist to assume a universe with infinite extent, because the concept of an unbounded universe is one of the most time-tested scientific observations made by man.

Posted 12-14-2004 06:10 AM

Jason Cawley
Wolfram Science Group
Phoenix, AZ USA

Registered: Aug 2003
Posts: 712

Not having a boundary is not the same as being infinite. Thus homogeneous and infinite are strictly orthogonal. While the number of possible topologies for a 2D manifold in a 3D space that are finite and without boundary is limited, more become possible as you increase the number of dimensions. (You can also get sheets that are infinite in some dimensions, finite without boundary in others, bounded in others, etc. And time may be one of these dimensions - so e.g. you can have a universe model that is finite in spatial dimension and infinite in time, or infinite in only one time direction.)

You say at one point "A prospective cosmological theory that is fundamentally at odds with Tegmark’s proposed universe is likely to be inconsistent with cosmological observations." This is exactly the proposition I vigorously dispute. Multiple proposed universes are possible, fully consistent with observations, that do not include Tegmark's infinite parallel worlds. (Many of course does not equal "every"). His deduction is a product of numerous additional unforced assumptions, which may seem natural or minimal to him, but are in no way required by the data.

You continue "Tegmark repeatedly makes the point that his description of infinite universes is not a personal theory, but is an “uncontroversial” tenet". It is true that he says so. But the statement itself is not true. And even if it were, there would be nothing necessary in it; it would just be similar ideas seeming natural to cosmologists working in similar traditions. A finite homogeneous universe without boundary is in no way inconsistent with anything known. If its volume were small enough, then there would be ways of confirming it observationally. But not seeing those only boosts the required scale; it does not rule out the topology.

Moreover, part of my point is there is no set of observations that Tegmark could point to as ruling out his own theory. It has too much room. Everything possible happens within it an infinite number of times. An observed is a possible. That is why I call it an underspecific theory. Suppose we saw exact duplicate observations in different directions - this could imply a closed topology or a parallel bit of universe "tile" that happened to be nearer than its average expected distance.

As for the statement that a finite island of matter in a boundaryless but flat topology of infinite space, is the most common view of non-cosmologists, I haven't the faintest idea why anyone would say so. I consider it a straw man, in fact. It is about the only thing we can probably rule out from observation alone. (Actually, we can put bounds on it - it is still possible on scales much larger than observable, but there is no reason to consider it likely from anything we can see). Outside of cosmological debate, I've never heard it advocated by anybody. It is not the only, nor the usual, way of envisioning a finite universe - a closed topology is. Twisting space around and topology on manifolds are standard fare for cosmologists.

As for the statement that an infinite universe is one of the most time tested observations made by man, it would be charitable to call it an exaggeration. No one has ever observed infinite anything. 250 years ago, people did not know galaxies existed, and spoke of them as "universes" when they were first posited. 500 years ago, people thought of the solar system in almost the same terms, with different theories about the "fixed stars" tacked on. Yet an infinite universe has been advocated by some philosophers for over 2000 years.

The fair way of putting it would be: the finitude or infinitude of the universe has been a regular philosophical subject for millennia, and every consensus about it ever formed has had to be rethought in light of later discoveries. (Traditionally, the similar but independent question of whether it is eternal or has a beginning in time has received even more attention.)

Posted 12-14-2004 06:19 PM

Daniel Geisler

Santa Rosa, CA

Registered: Jan 2004
Posts: 16

Is the universe a dodecahedron?

This related story may be of interest at physicsweb.

Posted 12-22-2004 03:28 PM