
A New Kind of Science: The NKS Forum > Pure NKS > topology-limits-computability via NKS
Charlie Stromeyer jr.


Registered: Not Yet
Posts: N/A

topology-limits-computability via NKS

Hi, I have previously considered the role of topology in various contexts such as M-theory (string theory physics), in which topology is thought to be emergent or relative. Later, I will also try to use an NKS perspective to examine the topology and networks of biological systems already studied by people such as Albert-László Barabási. First, though, here is a speculation I had seven years ago about the potential role of topology in computability theory, which I have not really thought about since but will now begin to dwell upon. Do you know if anyone has thought about these kinds of concepts below from an NKS perspective? Thanks.

In the magazine Science [1] there was a brief mention by B. Cipra describing a suggestion from Michael Freedman for approaching a proof that hard problems are, indeed, hard. First, think loosely of the idea of a "limit" as the residue of an infinite process. Then, Freedman wanted to "extend the concept of computation to include infinitely large problems, each one a limit of increasingly large, finite instances of the problem. The trick is to define the extension so that a particular criterion holds: Whenever the original computational problem has a polynomial time algorithm, each infinite example has a solution. The hope, then, is that for problems like the Traveling Salesman Problem [which is NP-complete], there will be infinitely large examples without solution." Freedman then wanted to find out how to mathematically define the notion of "limit" for this case.
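One way to read this criterion a bit more formally (my own paraphrase, not Freedman's notation): for a problem $\Pi$, consider chains of finite instances and their limits,

$$ I_1,\ I_2,\ I_3,\ \ldots\ \longrightarrow\ I_\infty = \lim_{n\to\infty} I_n , $$

with the extension defined so that

$$ \Pi \in \mathrm{P} \ \Longrightarrow\ \text{every limit instance } I_\infty \text{ of } \Pi \text{ has a solution.} $$

Exhibiting a single limit instance of an NP-complete problem with no solution would then, by contraposition, show that the problem is not in P.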

When this piece in Science first appeared, I tried to think about the "cardinality" and "ordinality" of Freedman's proposed sequence of increasingly large, finite instances of a problem, wondering if topology could be used to help define the convergence and limit of this sequence. At the time, I emailed Freedman these possible ideas (I didn't come up with a concrete realization, but Freedman's PNAS paper [2] at least introduces the concept of "ultrafilter limits" which may be a step forward):

1) Each different computational problem should have a unique infinite example of itself, i.e. each computational problem should have a unique limit. Perhaps this is somehow analogous to the statement that if a sequence of numbers has a limit then that limit is unique, or, more generally, that a space X is Hausdorff (e.g. a metric space) if and only if each filter in X has at most one limit. (The standard definitions behind this point and points 3 and 4 are recalled after point 5 below.)

2) The symbols +infinity and -infinity can be adjoined to the set R of all real numbers, which creates the topological space R' of extended real numbers. Infinity can exist within a topological space and, likewise, one might want both the infinite and the finite instances of a computational problem to exist in the same "space". (A sequence which diverges to infinity within a number system such as R can be mapped into a topological space as a family of points called a net, or Moore-Smith sequence, and this net can converge to infinity.)

3) The notion of "convergence" can be extended to the case of generalized sequences, where the index runs over a directed set and the terms lie in a topological space. The convergence of nets and that of filters can be defined in a topological space X and, conversely, this convergence can define a topology on X.

4) A point at infinity, i.e. an ideal element, can be added to a topological space via the Alexandroff construction, i.e. one-point compactification. This procedure makes infinity real within the topological space as a limit point, a point which must share at least one property in common with the points of the net.

5) Also, see the paper [3] by M. Sipser about a topological view of some problems in complexity theory. I wonder if such speculations about sequences and limits might also be relevant for trying to understand intermediate Turing degrees (from recursion theory).
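For reference, here are the textbook definitions behind points 1, 3 and 4 (standard point-set topology, nothing specific to Freedman's construction): a net in a space $X$ is a map $\alpha \mapsto x_\alpha$ from a directed set $(D,\le)$ into $X$, and $x_\alpha \to p$ means that for every open neighbourhood $U$ of $p$ there is an $\alpha_0 \in D$ with $x_\alpha \in U$ for all $\alpha \ge \alpha_0$. The space $X$ is Hausdorff if and only if every net (equivalently, every filter) in $X$ has at most one limit. For a locally compact Hausdorff space $X$, the one-point compactification $X^{\ast} = X \cup \{\infty\}$ is topologized by taking as open sets the open sets of $X$ together with the sets $(X \setminus K) \cup \{\infty\}$ for $K \subseteq X$ compact.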

P.S. Freedman also suggested that the concept of "limits" might be useful for some unsolved problems in geometry. Non-commutative geometry includes concepts of limits, and one might look, say, at possibly non-trivial classical limits of non-classical (e.g. bicovariant) calculi on a topological space.


[1] Science, Vol. 275, p. 1570, March 14, 1997. (This brief piece does not say anything beyond what I have already written in the first paragraph above.)

[2] M. Freedman, Proc. Natl. Acad. Sci. USA, Vol. 95, pp. 95-97, January 1998.

[3] Sipser, M. (1985) in "Theory of Algorithms", eds. Lovasz, L. & Szemeredi, E., Colloq. Math. Soc. Janos Bolyai (North-Holland, Amsterdam), Vol. 44, pp. 387-391.

Posted 04-08-2004 08:21 PM
Jon Awbrey


Registered: Feb 2004
Posts: 557

Topological Information Spaces

Charlie,

Will look up some references later -- or you can probably search-engineer your own well enough -- but there's a lot of topological thinking in the Dana Scott, Barwise, Seligman terrain of thought, going way back to the 60's, partly due to the topological character of "information domains" that constitute complete partial orders, if memory serves, which it often doesn't.
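(Roughly, the standard notion behind this: a directed-complete partial order is a poset $(D,\sqsubseteq)$ in which every directed subset $S$ has a least upper bound $\bigsqcup S$, and the Scott topology on $D$ takes as open the upward-closed sets $U$ for which $\bigsqcup S \in U$ implies $S \cap U \neq \emptyset$ for directed $S$. A map between such domains is continuous for this topology exactly when it preserves directed least upper bounds, which is why these information domains behave as topological spaces.)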

Jon Awbrey

Posted 04-08-2004 08:56 PM
Todd Rowland
Wolfram Research
Maryland

Registered: Oct 2003
Posts: 113

It is an interesting idea to try to define a limit of a computation. I'd like at some point to list some alternative ways of doing so, but that will have to wait until after the NKS 2004 conference, when I will have more time. There are non-obvious approaches using NKS methods.

I am skeptical about the usefulness of putting a topology, in the technical point-set sense, onto the spaces of computations, or some naturally extended space.

Looking at Freedman's paper, his motivation comes from metric spaces, which have had a big impact on group theory and geometry in the last couple of decades. But the important thing in the world of computations is not the triangle inequality, or the topological notions of continuity or homotopy. Instead, the notions of emulation and universality are more important.

Indeed, when using the mathematical mode of operation from the mid-twentieth century, one restricts attention to equivalence classes. In the case of finite computations from traditional recursion theory, one considers the intermediate degrees of computing power. In the case of simple programs from NKS, where typically computations are ongoing, one has the Principle of Computational Equivalence. The space of equivalence classes is not as interesting as the ways they become equivalent.

Also, it is interesting to find out the properties of individual rules. In the limit construction from the toy case in Freedman's paper, a limit is taken of spaces which have the metric property, and the metric function also has a limit. This is a general phenomenon when using ultrafilters to make a limit, e.g., the hyper-real numbers. The logical properties and structures of the spaces carry over into the limit.
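The sense in which the logical properties and structures carry over into the limit can be made precise by Łoś's theorem for ultraproducts (a standard statement, recalled here only for reference): given structures $M_1, M_2, \ldots$ and a non-principal ultrafilter $\mathcal{U}$ on $\mathbb{N}$, the ultraproduct $\prod_n M_n / \mathcal{U}$ identifies two sequences when they agree on a set of indices belonging to $\mathcal{U}$, and for every first-order sentence $\varphi$,

$$ \prod_n M_n \big/ \mathcal{U} \ \models\ \varphi \quad\Longleftrightarrow\quad \{\, n : M_n \models \varphi \,\} \in \mathcal{U} . $$

Taking every $M_n = \mathbb{R}$ gives the hyperreal numbers $^{\ast}\mathbb{R}$; in general a first-order property holds in the limit exactly when it holds at $\mathcal{U}$-many of the finite stages.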

I find Freedman's toy cases of searching a finite vector space versus searching a binary tree not very convincing as far as his intention of uncovering some topological difference between P and NP goes. However, the analogy is much closer when talking about the difference between computational reducibility and computational irreducibility.

He is really attacking the more practical question of when a computation, in the NKS sense, can be sped up. It is doubtful whether this condition is equivalent to either an integer-dimensional space, or one with infinitesimal translation properties. These are macroscopic manifestations of reducibility, but one can imagine others.

For example, one can take the "hyper"-limit of rule 90. See the attached graphic. For instance, one can tack on real numbers to the model of rule 90, and try to predict the color of the pixel at {x,t}. It is an illuminating exercise to go through the details, and compare the fractal version, which is geometrical, to the logical "hyper"-version.
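As a purely finite sketch of the "predict the pixel at {x,t}" exercise (this covers only the ordinary geometric/fractal version of rule 90 started from a single black cell, not the ultrafilter "hyper"-limit described above), one can check a direct simulation against the well-known closed form via binomial coefficients mod 2:

# Rule 90 from a single black cell: direct simulation vs. closed-form prediction.
# Closed form (Lucas' theorem): the cell at offset x from the centre at step t is
# black iff C(t, (t+x)/2) is odd, i.e. iff the binary digits of k = (t+x)/2 are a
# subset of the binary digits of t.

def rule90_row(t):
    """Return the set of offsets that are black at step t (single black cell at offset 0, step 0)."""
    cells = {0}
    for _ in range(t):
        candidates = {x - 1 for x in cells} | {x + 1 for x in cells}
        cells = {x for x in candidates if (x - 1 in cells) ^ (x + 1 in cells)}
    return cells

def predict(x, t):
    """Closed-form colour of the pixel at {x, t}: True iff C(t, (t+x)/2) is odd."""
    if abs(x) > t or (x + t) % 2 != 0:
        return False
    k = (x + t) // 2
    return (k & ~t) == 0   # C(t, k) is odd iff k's binary digits are a submask of t's

t = 32
row = rule90_row(t)
assert all(predict(x, t) == (x in row) for x in range(-t - 1, t + 2))
print("direct simulation and closed form agree at step", t)

The existence of this kind of shortcut formula is exactly what computational reducibility means in this setting; for a computationally irreducible rule no comparably simple prediction is expected.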

Here are some brief comments on Charlie's first two points:

1) Typically in point-set topology, a sequence can have more than one limit point. Also, the "hyper" construction can depend on the choice of ultrafilter, which is a pretty complicated issue. More concretely, it is not hard to imagine taking the limit of rule 90 in the wrong way.

2) In the setting from Freedman's paper, the limits of computations are no longer computations (in any sense), but instead the computations sit inside a larger space. So in this picture Infinity would not be a computation.

My one other comment for now is that the topology from partial orders, say of Turing Degrees, is a different type of topology from what Charlie and Michael are talking about.

Todd Rowland has attached this image:

Posted 04-13-2004 10:25 PM
Charlie Stromeyer jr.


Registered: Not Yet
Posts: N/A

Thank you for your reply. I have some more comments, questions and ideas to post later, but here are some preliminary remarks and questions for the sake of clarification. Also, it will take me some time to adapt as I try to re-intuit basic mathematical concepts in terms of NKS.

"I am skeptical about the usefulness of putting a topology, in the technical point-set sense, onto the spaces of computations, or some naturally extended space."

Here, I was trying to think by analogy, attempting to imagine what a generalized concept of "limit" might mean for computational problems. Topology, in the sense of point-set topology, is concerned with the generalization of concepts such as continuity and limits to sets other than the real and complex numbers.

"I find Freedman's toy cases of searching a finite vector space versus searching a binary tree not very convincing as far as his intentions of uncovering some topological differences between P and NP."

There is a field of mathematics called algebraic complexity theory (see, e.g., the website [1] and its links page), and one activity of this field is the description of algebraic-topological differences between separate complexity classes. In this approach, the usual P/NP distinction is the discrete case, with integer coefficients for the input polynomials and a finite alphabet.

"For example, one can take the "hyper"-limit of Rule 90. See the attached graphic. For instance, one can tack on real numbers to the model of rule 90, and try to predict the color of the pixel at {x,t}. It is an illuminating excercise to go through the details, and compare the fractal version, which is geometrical, to the logical "hyper"-version."

How might NKS view concepts such as the hyperarithmetical hierarchy of degrees of recursive unsolvability which is uniquely determined by constructive ordinal numbers (hyperconstructive ordinals are uniqueness ordinals) ?

Also, what about the integer programming theory approach to problems such as knapsack polytopes or traveling salesman polytopes?

"My one other comment for now is that the topology from partial orders, say of Turing Degrees, is a different type of topology from what Charlie and Michael are talking about."

Can you please say more about this so that I can be sure we are thinking of the same concept?


[1] http://math-www.uni-paderborn.de/agpb/research.html

Posted 04-14-2004 02:57 AM
Charlie Stromeyer jr.


Registered: Not Yet
Posts: N/A

In another paper by Freedman [1], he conjectures the "ubiquity of undecidability", which reminds me of ideas à la Kolmogorov, Chaitin and Wolfram. His conjecture means that the "number" of provable statements is less than the square root of the number of statements, and here the "ubiquity" is posited for the case of a formal system subject to Gödel's incompleteness theorem (e.g. Peano arithmetic or ZFC).
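To make the counting concrete (this is only my reading of the conjecture as summarized above, not Freedman's exact formulation): writing $S(n)$ for the number of statements of the system of length at most $n$ and $P(n)$ for the number of those that are provable, the claim would be that asymptotically

$$ P(n) \ \lesssim\ \sqrt{S(n)} , $$

so that, in this counting sense, provable statements become vanishingly rare among all statements.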

I also found within a book on fuzzy topology [2] an example of convergence to an infinite set. The set of fuzzy points to which a fuzzy net converges is, in general, infinite (yet imposing a restriction on the supports yields uniqueness of convergence, related to the iterated limits theorem).

I will keep thinking about these issues.


[1] M. Freedman, "Topological Views on Computational Complexity", Proceedings of the ICM 1998, Berlin, pp. 453-464.

[2] http://www.wordtrade.com/science/ma...cs/topology.htm

Posted 04-18-2004 12:14 AM