A New Kind of Science: The NKS Forum (http://forum.wolframscience.com/index.php)
- NKS Way of Thinking (http://forum.wolframscience.com/forumdisplay.php?forumid=5)
-- animism (http://forum.wolframscience.com/showthread.php?threadid=309)
the new kind of science based on simple programs reduces everything to some variation on the 256 elementary cellular automata. everything is an automaton. but it observes that many automatons transcend triviality. many automatons are said to be complex. and these automatons, whatever form we find them in, are not only computationally equivalent to each other, but also computationally equivalent to what were traditionally called conscious, yearning, intelligent, living beings with souls.
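to make this concrete, here is a minimal sketch in python (my own illustration, nothing from the book itself) of how one of the 256 elementary cellular automata updates: each cell looks at itself and its two neighbors, and the eight bits of the rule number say what each of the eight possible neighborhoods becomes on the next step.

```python
# minimal sketch of one update step of an elementary cellular automaton.
# the rule number's eight bits give the next state of a cell for each of
# the eight possible (left, center, right) neighborhoods.

def ca_step(cells, rule):
    """apply elementary CA `rule` once to a list of 0/1 cells (periodic boundary)."""
    n = len(cells)
    out = []
    for i in range(n):
        # pack the neighborhood into a number from 0 to 7
        neighborhood = (cells[i - 1] << 2) | (cells[i] << 1) | cells[(i + 1) % n]
        # look up the matching bit of the rule number
        out.append((rule >> neighborhood) & 1)
    return out

# rule 254 (class 1) floods to uniformity; rule 30 (class 3) keeps churning,
# yet both are read off the same kind of eight-bit lookup table.
print(ca_step([0, 0, 0, 1, 0, 0, 0], 254))  # → [0, 0, 1, 1, 1, 0, 0]
```

the point of the sketch is only that the entire "physics" of such a system fits in a single byte, which is what makes the jump from trivial to complex behavior so striking.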
this reminds me of the complementary classic and baroque ways of seeing things in western art. the tradition of classicism sees things through deterministic line and plane, closed form and individually articulated organization. the baroque tradition begins with a similar reduction of things to pictures, but linear clarity breaks up into painterly mesh, planar composition banks into vertiginous recession, and discrete figures merge into pulsating swarms that merge into their material backgrounds and become fragments of something transcending the picture.
the new kind of science based on simple programs like western art seems to have room for viewing an unexpectedly wide range of things both as mindless machines and living creatures with minds of their own. this seems to rehabilitate the tradition of animism and place it alongside mechanism as an acceptably scientific way of seeing things.
these thoughts occur to me as i draw on both science and art to design user interfaces for software systems. the first task of user interface design is to abstract a model of user behavior and then embody it in metaphors that can be visualized on a computer screen. but once a system transcends the trivial through a sufficiently complex user interface, it seems to deserve more than mere subservience to its users. in fact following the relentless arbitrariness of user preferences seems to deaden a system rather than make it come more alive. the challenge seems to be making room for the system to live a life of its own that competes and co-evolves with its users. i think the new kind of science based on simple programs encourages taking up this challenge. and i hope i am not too far off in this estimation. am i?
Originally posted by william ford
the new kind of science based on simple programs reduces everything to some variation on the 256 elementary cellular automata.
Originally posted by william ford
in fact following the relentless arbitrariness of user preferences seems to deaden a system rather than make it come more alive. the challenge seems to be making room for the system to live a life of its own that competes and co-evolves with its users. i think the new kind of science based on simple programs encourages taking up this challenge. and i hope i am not too far off in this estimation. am i?
interface design is confronted with ever accelerating demands for more and different features. continuously piling on these features may be good for users but not for the interface. so the interface and the user are in competition. they have different goals. and the designer is not just an advocate of the user but more importantly of the co-evolving whole. consider lovelock's gaia advocacy versus an anthropocentric approach.
an interface design might be thought of as a kind of flock of features in constant reconfiguration. and the reason for enabling constant reconfiguration is not that users know best. quite the opposite: users are relentlessly arbitrary, and enabling dynamic reconfiguration allows the interface to survive their assault. users might be thought of like kids throwing rocks at a flock of birds, and user preferences as a mechanism to avoid taking hits while maintaining the integrity of the flock. the idea of computational equivalence seems to support this point of view. like the tradition of animism it imagines the interface to have a mind of its own that is worthy of consideration and accommodation.
I think game theory supports the view that you've taken. I don't see how NKS specifically contributes to this above and beyond game theory.
Are you familiar with game theory, or universal darwinism and the like?
Dennett gave us a typology of imputed system types, based on strategies of prediction. He thinks we treat some systems as material when we succeed in predicting their behavior with rules that are simple enough - I would specify, at the level of the resulting behavior as well as the level of the rule or formula followed. With some other systems we employ a different prediction strategy. We think we see what they are for, and predict their behavior by examining a reduced set of goals somehow simpler than the behaviors themselves. Call them designed or purposeful systems.
He has us imputing intelligences or wills when we find a third prediction strategy works. This involves imputing an internal state - a thought or a desire - as well as a goal. We then analyze changes in behaviors as changes in internal states, altering goals, which organize resulting behavior. We don't try to understand what happens on the floor of the New York Stock Exchange with a mechanical model. To Dennett, all three are meant to be successful prediction strategies, though they might fail around the edges.
In ancient times, wills or something like them were imputed to explain essentially all change, sometimes including any motion whatsoever. The closest thing to a definition of "soul" for some of the ancients was "cause of motion". Modern science in its early stages wanted to replace all such explanations with efficient causes, in turn essentially all reduced to forms of contact. This amounted to an attempt to understand matter entirely according to inertia and impenetrability. Everything was a push - a somewhat naive view because it could not explain how a push works. Then we got forces, which in their origins (production of fields, we would say today) are idealized purely formal causes of motion. We don't ask why they arise; we only describe the shape and variety in which we see that they do.
One does not need to posit different kinds of entities to see that different prediction schemes will work or fail with different kinds of systems. And sometimes they will all fail. If they all fail and we don't see any meaning in the outcome we are inclined to say it is random or it seems random. Sometimes we say the same about systems with the third sort of predictability, because we can't determine internal states or can't predict their changes. We are more likely to call those free than random. Or we might call something free when we can see afterwards some reason why it behaved a certain way, but couldn't tell beforehand that it would. A less loaded thing to say is that the system was unpredictable, at least to us.
The meaning we give to notions like "mindless machine" or "mind of its own" sometimes oscillates. Unpredictable twitching we are inclined to call random but also to see as mindless, compared to purposeful action that seems simpler - with the goal directed sort of predictability. We are expecting a certain sort of voluntary control, and absence of purpose suggests mindlessness. But let an ant wander around in unpredictable ways, and we suspect it has a mind of its own. Predictable behavior can be taken as determined and unconscious, or as purposeful and conscious, depending on the context and our other intuitions about the capabilities of the system in question.
Predictability or its absence, simplicity or its absence, in contrast seem to be steadier concepts. Predictability may be disputed as a theoretical matter, but in practice we can typically resolve differences over how predictable an outcome is, at least within broad limits. (E.g. if anyone thinks the outcome of an election is certain, I can offer odds much better than that, then lay them off elsewhere.) Nobody seriously thinks a checkerboard pattern is complicated - nobody will bet against my ability to predict the color of far off squares on one. I doubt anyone would be willing to give odds that their favorite pattern detecting algorithm can predict various steps in rule 30 many times in a row, without calculating every step in advance.
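As an illustration of that last point (a sketch of my own in Python, not anything from the thread): the center column of rule 30 can apparently only be obtained by simulating every step, which is exactly what makes betting against a pattern detector safe.

```python
# Sketch: generate the center column of rule 30 by brute-force simulation,
# starting from a single black cell.

def step_rule30(cells):
    """One rule-30 update on a periodic row of 0/1 cells."""
    n = len(cells)
    return [(30 >> ((cells[i - 1] << 2) | (cells[i] << 1) | cells[(i + 1) % n])) & 1
            for i in range(n)]

def center_column(steps):
    """Values of the center cell over `steps` updates. The row is made wide
    enough that the periodic boundary cannot reach back and contaminate
    the center within the horizon simulated."""
    width = 2 * steps + 3
    cells = [0] * width
    cells[width // 2] = 1
    column = [1]
    for _ in range(steps):
        cells = step_rule30(cells)
        column.append(cells[width // 2])
    return column

print(center_column(15))  # first values: 1, 1, 0, 1, 1, 1, 0, 0, ...
```

No shortcut formula for this sequence is known; as far as anyone can tell, the simulation above is the prediction.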
What NKS deals with is cases where there is simplicity at the level of the underlying rule, and may or may not be simplicity at the level of the resulting behavior. And clearly, you can get either type at the level of resulting behavior, without needing any serious change in the degree of simplicity at the level of the underlying rule. Science is all about finding simple explanations for things. And NKS shows that there can be simple explanations - simple at the level of a generating rule - even for things whose behavior does not look simple. One might have thought that if the behavior seems to change, the generating rule must have changed. But this need not be the case.
Iterated use of the same rule is one type of order or regularity that can happen. It is a regularity that is compatible with a wide range of outcome behaviors - simple ones, well structured ones, disordered ones, complicated ones that depend on internal details. Which does not mean that everything that looks complicated is the outcome of repeated application of one simple rule. A simple rule is one possible cause of such a "signal". When your business is looking for simplifications, that is important to know. It tells you one place to look, one possibility that must be checked.
thanks for your thoughts but it all comes down to the computational equivalence for me. that seems to be the big idea. and cellular automata are the shortest route to it.
lynn margulis takes on darwinists for being zoocentric. and she demonstrates what a richer view of life one gets from giving bacteria their due. computational equivalence seems to crank up this cosmopolitanism even further to encompass all non-trivial systems.
As for what it may mean for designs of interfaces, I think the main thing is to understand the typical expectations human users will bring to particular areas or issues, and to work with rather than against those expectations. Handling file icons works by acting according to our expectations for the predictable behavior of physical objects. We don't expect them to wriggle around or to talk to us as we handle them. We expect them to do what we do to them.
The idea of launching an application program fits our intuitions about purposeful behaviors. We do not expect to understand internally how every program is functioning and do not wish to see their gut executions most of the time (unless we are studying them or repairing them, etc). We need only the ability to predict what they will do at a macro level, and for them to do those things reliably. If they fail, they must fail in obvious ways and call for our intervention. In effect, they delegate change in purpose decisions to us. We expect them to do their one thing. We don't need to know the details, just "mission accomplished" or "oops, that didn't work".
Systems meant to operate continually without our input and to interact with us, on the other hand, need to conform to our expectations about intelligent behavior - however minimally. They need to be polite, in effect. Most of the time that means being unobtrusive, and when performing actions doing so reliably, like the previous case. Where they need to change their internal state significantly - a change in what they are trying to do, a change in goal or desire - they should interact with us, and inform us of the change, enough so that we can maintain our basic ability to predict their subsequent actions.
If these categories are mixed, humans get frustrated with their machines, for not acting as they expect, or fitting into the category of prediction the human is implicitly using. If the objects on your desktop meandered around you would find it annoying. When a browser meant simply to do one thing exactly as it is told ("go here") instead beeps and pops up and asks impertinent questions and interrupts, humans will want to pull it out by the roots. If an operating system sits there like a rock after encountering an error, we say it is not "bulletproof".
It is fine for higher level interface functions to surprise us, to be unpredictable in a different sense - good, even. But they should do so within the forms of politeness that we have developed for dealing with each other. We use those in interaction with other intelligent agents that change their way of behaving when they change unseen internal states, because they allow mutual adjustment of expectations, without treading on the other agent's planning more than is necessary.
It does not matter whether these distinctions are real in some theoretical sense. They matter simply because we make them, we organize our expectations in such ways, and thus work more naturally with systems that fit those expectations than we do with systems that do not.
astonishment without grief
if user interface design depends on binding to user expectations as you suggest then we might ask what the new kind of science based on simple programs tells us about expectations.
the new kind of science based on simple programs says that at each moment i check my neighbors and then reference a rule of thumb i have somehow come to use to turn this information into my next step. expectation would seem to be anticipation of my next step based on some internal representation i carry of my rules of thumb. whether i am a bug or a puddle or a person doesn't matter. and how my rules of thumb came to be doesn't matter. what does matter is whether the simple programs that drive my rules of thumb are trivial or complex. if they are trivial programs like class one cellular automata then binding to them puts a very low ceiling on user interface design. alternatively if user interface design weans a user from trivial rules of thumb and trades them up to complex simple programs then the user might be able to experience astonishment without grief. the reason for this might be that the user's internal representation of transformation rules might still recognize the familiar process of neighbor checking and be able to match that comfortably with more expansive rules.
this thread on user interfaces and animism doesn't seem to be generating any preferential attachment so i'll close with a few thoughts on just that.
cellular automata are a user interface. the 256 elementary cellular automata are a visualization of universal rules of transformation which can be used as a toolkit for assembling, monitoring and reconfiguring pictures of any particular transformations. the new kind of science based on simple programs seems to be all about rebinding science from its familiar continuous equations interface to the astonishingly simple alternative exemplified by cellular automata. it is not unlike the trade up from command line interfaces to windows.
asking how to trade people up from continuous equations to cellular automata as their common user interface leads us to ask what the new kind of science based on simple programs tells us about the way things change. it seems to say that things respond to other things locally and then persist in doing so. without any intervention by outside forces this can have three results - quick termination, endless repetition (nested or un-nested) or infinite variation encompassing non-periodic local structures. following this model, users of various interfaces, or sciences, check neighbors, make adjustments and persist in doing the same over time. networks of users form. some terminate, some keep going but don't go anywhere and others keep linking into more and different things with various forms of organization appearing and disappearing along the way.
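here is a crude sketch in python (my own, purely illustrative) of those three results: run an elementary rule from a single seed on a small ring and report whether it dies out, falls into repetition, or keeps varying within the horizon checked.

```python
# crude sketch: classify the long-run behavior of an elementary CA run
# as termination, repetition, or continued variation (within a horizon).

def step(cells, rule):
    """one elementary CA update on a periodic ring of 0/1 cells."""
    n = len(cells)
    return [(rule >> ((cells[i - 1] << 2) | (cells[i] << 1) | cells[(i + 1) % n])) & 1
            for i in range(n)]

def classify(rule, width=31, steps=500):
    cells = [0] * width
    cells[width // 2] = 1          # single seed cell
    seen = {tuple(cells)}
    for _ in range(steps):
        cells = step(cells, rule)
        if not any(cells):
            return "terminates"
        key = tuple(cells)
        if key in seen:            # state revisited: the run must now cycle
            return "repeats"
        seen.add(key)
    return "keeps varying"

# rule 0 terminates at once; rule 254 floods the ring and then sits on a
# fixed point; rule 30 shows no repeat within the short horizon used here.
for r in (0, 254, 30):
    print(r, classify(r))
```

on a finite ring every run must eventually repeat, so "keeps varying" only means no repetition was seen within the chosen horizon - which is the point: for some rules the horizon needed is astronomically longer than for others.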
barabasi has observed that many networks from yeasts to brains to websites grow by preferential attachment, a tendency of things to link to other things that already have lots of links. this manifests itself in hubs and power curves. the new kind of science based on simple programs shows that hubs can form without design, without any external intervention like natural selection, without any intrinsic value. but once they do form they have survival value. for instance flocking helps individuals avoid starvation, exhaustion, predation and parasites. a positive feedback loop on these advantages may be what has come to be thought of as preferential attachment. this is only part of the picture and a secondary part at that.
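a minimal sketch in python (my own illustration; the growth rule is the standard barabasi-albert one, not taken from his data) of how hubs form from preferential attachment alone, with no design and no intrinsic value of the nodes:

```python
import random

# minimal preferential-attachment sketch: each new node links to an existing
# node with probability proportional to that node's current degree.

def grow_network(n_nodes, seed=0):
    random.seed(seed)
    degrees = [1, 1]        # start with two nodes joined by one link
    endpoints = [0, 1]      # every link lists both of its endpoint nodes
    for new in range(2, n_nodes):
        # picking uniformly from the endpoint list is the same as picking
        # a node with probability proportional to its degree
        target = random.choice(endpoints)
        degrees.append(1)
        degrees[target] += 1
        endpoints += [new, target]
    return degrees

degrees = grow_network(2000)
print("biggest hub:", max(degrees), "median node:", sorted(degrees)[1000])
```

run it and a few heavily linked hubs tower over a mass of barely connected nodes - the power curve appears without anyone designing it in.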
preferential attachment might be driven primarily by random chance and secondarily by the survival value of hubs or flocks that is independent of, and perhaps even contradictory to, the value of the thing being attached to. people bind to familiar things like superstitions, unreliable scientific paradigms, lousy user interfaces, sometimes even against their better judgment. but as the saying goes, familiarity also breeds contempt. one eventually realizes whether a preferred attachment makes sense on its own merits or only because everybody else is doing it.
I really like Darwin's theories. And for those who'd like to know how useful universal Darwinism is as a framework to study competition and industrial evolution, check this: http://ideas.repec.org/p/esi/evopap/2005-02.html