[Real time computing without stable states] - A New Kind of Science: The NKS Forum
Posted by: Jason Cawley
I recently came across a neural modeling paper from the machine learning theory tradition that I thought might interest some here, since it touches on questions discussed here as the class 3 problem, computation in dynamics vs. by attractors, computation without stability, and subjects of general interest in the understanding of intelligence.
It came out a while ago (2001). It is entitled -
Real-Time Computing Without Stable States:
A New Framework for Neural Computation
Based on Perturbations
The team that did it is Wolfgang Maass, Thomas Natschläger, and Henry Markram.
Here is their abstract -
A key challenge for neural modeling is to explain how a continuous stream of multi-modal input from a rapidly changing environment can be processed by stereotypical recurrent circuits of integrate-and-fire neurons in real-time. We propose a new framework for neural computation that provides an alternative to previous approaches based on attractor neural networks. It is shown that the inherent transient dynamics of the high-dimensional dynamical system formed by a neural circuit may serve as a universal source of information about past stimuli, from which readout neurons can extract particular aspects needed for diverse tasks in real-time. Stable internal states are not required for giving a stable output, since transient internal states can be transformed by readout neurons into stable target outputs due to the high dimensionality of the dynamical system. Our approach is based on a rigorous computational model, the liquid state machine, that unlike Turing machines, does not require sequential transitions between discrete internal states. Like the Turing machine paradigm it allows for universal computational power under idealized conditions, but for real-time processing of time-varying input. The resulting new framework for neural computation has novel implications for the interpretation of neural coding, for the design of experiments and data-analysis in neurophysiology, and for neuromorphic engineering.
You can get the whole paper in PDF or PostScript here -
I include a key paragraph to give a sense of what they are after -
"The foundation for our analysis of computations without stable states is a rigorous computational model: the liquid state machine. Two macroscopic properties emerge from our theoretical analysis and computer simulations as necessary and sufficient conditions for powerful real-time computing on perturbations: a separation property, SP, and an approximation property, AP."
Many independent, selective, and deliberately "lossy" mappings from the underlying complex state effectively extract information without much caring how it "got there". Again from the body of the paper: "each readout can learn to define its own notion of equivalence of dynamical states within the system. This most unexpected finding of 'readout-assigned equivalent states of a dynamical system'..." - and that equivalence is what one uses to recover information.
The idea is that "good separation ability" on system trajectories makes stable storage of data through time irrelevant. You don't have to remember input A; you just have to tell whether the whole system came from somewhere A-ish or somewhere B-ish. With high enough dimensionality, lots of those cuts can be made simultaneously from the same underlying system trajectories. A big space of possible combinations among the lossy mapped outcomes (a, b, ab, ac, bc, abc, ...) provides that implicit dimensionality.
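That cut-making picture can be sketched concretely - again a toy rate model with assumed parameters, not the paper's spiking circuit. A fixed random reservoir is driven by noisy instances of two input classes, and a purely linear, deliberately lossy readout is fit on the final liquid states to tell A-ish from B-ish:

```python
import numpy as np

rng = np.random.default_rng(1)
N = 80  # liquid units; illustrative size

W = rng.normal(0, 1.0 / np.sqrt(N), (N, N))
w_in = rng.normal(0, 1.0, N)

def liquid_state(u):
    x = np.zeros(N)
    for u_t in u:
        x = np.tanh(W @ x + w_in * u_t)
    return x

t = np.linspace(0, 4 * np.pi, 40)

def sample(cls):
    """A noisy instance of class 0 (sine) or class 1 (square wave)."""
    base = np.sin(t) if cls == 0 else np.sign(np.sin(t))
    return base + 0.1 * rng.normal(size=t.size)

# Collect liquid states for many trials of each class
X = np.array([liquid_state(sample(c)) for c in [0, 1] * 30])
y = np.array([0, 1] * 30)

# A single linear readout, fit by least squares, makes one "cut"
# through the high-dimensional space of liquid states
w, *_ = np.linalg.lstsq(X, y - 0.5, rcond=None)
pred = (X @ w > 0).astype(int)
print((pred == y).mean())  # training accuracy of the linear readout
```

Other readouts, fit to other targets on the very same states X, would make independent cuts in parallel - the liquid is computed once and shared.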
"equivalence classes are an inevitable consequence of collapsing the high dimensional space of liquid states into a single dimension, but what is surprising is that the equivalence classes are meaningful in terms of the task".
My executive summary would be: lossy mappings as such - deliberate and blatant reductions that make useful distinctions, in philosophic terms - have more to do with computation than we might have thought, and need precious little in the way of special setups or conditions to operate.
NKS has already shown how little a formal discrete system needs to achieve universal computation. And it turns out there are straightforward ways to get to discrete formality, even if it isn't obvious in the system's setup.
I hope this is interesting.
Forum Sponsored by Wolfram Research
© 2004-2013 Wolfram Research, Inc.