A New Kind of Science: The NKS Forum > Pure NKS > genetic algorithm fitness function
Author
Philip Ronald Dutton
independent
Columbia, SC

Registered: Feb 2004
Posts: 172

genetic algorithm fitness function

I am having trouble distinguishing between the following:

x: fitness function of genetic algorithms (GA)
y: simple rules of small programs

I have worked with GAs and find them to be very much like what I would expect a general constraint system to be. I realize that a constraint system may be different from a universal CA, etc. However, when thinking from the perspective of "guessing the formula (shall I say "rules" perhaps?) first, in hopes of getting the desired behaviour"... well, that is where fitness functions and rules seem to develop similarities.

(please pardon the non-user friendly grammar!)

__________________
P h i l i p . R . D u t t o n


06-11-2005 12:06 AM
Jason Cawley
Wolfram Science Group
Phoenix, AZ USA

Registered: Aug 2003
Posts: 712

GAs are different from pure constraint systems on the one hand, and pure explicit evolution rules on the other. They are iterative improvement schemes with some extra details added.

A pure constraint system does not specify anything about how a solution that fits the constraints might actually be found. In principle, you could enumerate all the possible solutions and notice which ones exactly work. If the number of possibilities is very high, though, this is not a realistic assumption to make for a natural system. That is, a natural system will rarely have the physical ability to visit literally all its possible states, and stop if and only if it exactly satisfies some complicated constraint.
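As a toy illustration of that exhaustive view of a constraint system, here is a minimal Python sketch. The constraint chosen, "no two adjacent 1s in a binary string," is just a hypothetical example; the point is that the constraint says nothing about how to find solutions, so the only fully general procedure is to visit all 2^n states and test each one.

```python
from itertools import product

# Hypothetical constraint for illustration: no two adjacent 1s.
def satisfies(s):
    return all(not (a == 1 and b == 1) for a, b in zip(s, s[1:]))

n = 12
# Exhaustive enumeration: visit every one of the 2^n possible states
# and keep exactly those that satisfy the constraint.
solutions = [s for s in product([0, 1], repeat=n) if satisfies(s)]
print(len(solutions), "of", 2 ** n, "states satisfy the constraint")
```

Even at n = 12 this already means checking 4096 states; the exponential growth is why exhaustive satisfaction is implausible as a mechanism for natural systems.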

It can be a reasonable model of a simple enough situation. E.g. a finite state system with only a modest number of states, all connected to one another, and with a single "end" state that returns to itself. Wherever you start, eventually you hit the end state. The transitions can be probabilistic or deterministic. A large sample of runs might have different "decay" times, but eventually they all end up in the attractor state.
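A minimal sketch of such a finite-state system, with a made-up five-state layout in which every state can reach every other and state 4 is the absorbing "end" state. The transition here is probabilistic (a uniformly random jump each step), so individual runs have different decay times, but every run eventually lands in the attractor:

```python
import random

random.seed(0)

# Toy finite-state system: states 0..4, state 4 is the absorbing end state.
# From any other state, jump to a uniformly random state each step.
def decay_time(start, end=4, n_states=5):
    state, steps = start, 0
    while state != end:
        state = random.randrange(n_states)  # probabilistic transition
        steps += 1
    return steps

times = [decay_time(start=0) for _ in range(1000)]
print("mean decay time:", sum(times) / len(times))  # roughly n_states = 5
```

The distribution of decay times is geometric, but the qualitative point is just that all trajectories terminate in the single end state regardless of where they start.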

There is nothing about a constraint as such that ensures it actually can be satisfied. Or a set of constraints may underspecify the eventual system state, allowing a whole range of possible outcomes.

An explicit evolution rule, on the other hand, tells you the dynamics directly but does not in general tell you what overall behavior will arise from those dynamics. It can be a complicated calculation to find out, and the answer can vary with the details of the starting state as well as of the explicit rule. Some constraints may be deducible from the form of the rule itself. Others, either rigidly true at every step or statistically approached on average, may arise from the typical dynamics without being easily deducible from the form of the rule alone (absent such calculations, that is). Examples from elementary CAs (ECAs) are things like the average density of cells after a long period of time.
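For concreteness, a small sketch that iterates an ECA (rule 30 here, with periodic boundaries) and simply measures the long-run density, a statistical property one would be hard pressed to read off the rule table itself:

```python
import random

def eca_step(cells, rule=30):
    # One step of an elementary CA with periodic boundary conditions.
    # Bit i of the rule number gives the new cell value for
    # neighborhood i = 4*left + 2*center + right.
    n = len(cells)
    table = [(rule >> i) & 1 for i in range(8)]
    return [table[(cells[(i - 1) % n] << 2) | (cells[i] << 1) | cells[(i + 1) % n]]
            for i in range(n)]

random.seed(1)
cells = [random.randint(0, 1) for _ in range(200)]
for _ in range(500):
    cells = eca_step(cells)
density = sum(cells) / len(cells)
print("rule 30 density after 500 steps:", density)  # empirically near 0.5
```

Finding that the density settles near a particular value is exactly the kind of fact obtained by running the dynamics rather than by deduction from the rule's form.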

An iterative improvement scheme specifies some kind of attractor state or goal, conceptually some function of the whole system state, and a step function that determines the actual dynamics. The step function may depend on the measured distance from the goal. It is a generalized feedback scheme that tries to "drag" the system evolution closer to satisfying a constraint, or to minimizing the deviation of some measure from a goal.

Ordinary feedback works when the direction the system is nudged by small changes is readily predictable, or in other words when there is some linearity between the things being nudged and the measure. When there are many things to nudge, many variables, a looser condition is sufficient: a sufficiently smooth convexity in the higher-dimensional parameter space that all the "nudgables" define. A series of nudges, perhaps sometimes in one direction and later in another, can then be strung together into a path that finds the overall goal.

But if there isn't a smooth, convex response to small changes, this scheme typically fails: it readily gets stuck in local minima. When every nudge leads to a worse spot, you sit still, even if the overall best spot is a long way away. We say the dynamics do not allow you to reach the optimum.
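A minimal sketch of this failure mode, on a deliberately contrived one-dimensional cost landscape with a local minimum at x = 2 and the global minimum at x = 20:

```python
# Hypothetical rugged cost landscape on the integers: two basins,
# a local minimum near x=2 (cost 5) and the global minimum at x=20 (cost 0).
def cost(x):
    return min((x - 2) ** 2 + 5, (x - 20) ** 2)

def hill_climb(x):
    # Greedy nudging: step to a neighbor only if it lowers the cost.
    while True:
        best = min((x - 1, x, x + 1), key=cost)
        if best == x:           # every nudge is worse: stuck where we are
            return x
        x = best

print(hill_climb(0))   # starts in the local basin  -> stops at 2
print(hill_climb(15))  # starts in the global basin -> reaches 20
```

From a start near the local basin the climber halts at x = 2 even though a far better point exists at x = 20; which optimum is reached depends entirely on the starting point.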

There are various ways you can retool an iterative improvement scheme to make it more likely to find an overall optimum. You can add randomness, or you can make progressive changes in the scale of the movements allowed. GAs add ways of mixing subelements already found, in fundamentally combinatorial rather than continuous ways.

But they do not exhaustively search the space of combinations, and they are thus not assured of finding a global optimum. In many cases they act largely by effectively injecting a large amount of randomness, and changes on multiple scales, into an iterative improvement scheme.
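A bare-bones GA sketch in that spirit: truncation selection, one-point crossover to mix subelements, and mutation as the injected randomness. The OneMax fitness (count the 1 bits) and all parameter values here are arbitrary choices for illustration:

```python
import random

random.seed(0)

N, BITS, GENS = 40, 20, 60

def fitness(ind):               # OneMax: number of 1 bits
    return sum(ind)

def crossover(a, b):            # mix "subelements" of two parents
    cut = random.randrange(1, BITS)
    return a[:cut] + b[cut:]

def mutate(ind, rate=0.02):     # inject randomness at the bit scale
    return [bit ^ (random.random() < rate) for bit in ind]

pop = [[random.randint(0, 1) for _ in range(BITS)] for _ in range(N)]
for _ in range(GENS):
    pop.sort(key=fitness, reverse=True)
    parents = pop[:N // 2]      # truncation selection: keep the fitter half
    pop = parents + [mutate(crossover(random.choice(parents),
                                      random.choice(parents)))
                     for _ in range(N - len(parents))]

print("best fitness:", max(fitness(ind) for ind in pop))  # approaches BITS
```

OneMax is exactly the easy case: every 1 bit is a locally preservable partial answer, so recombination works well. Nothing in the code guarantees the global optimum; it only samples a tiny fraction of the 2^20 combinations.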

The intuition behind them is that finding "good" partial answers to any problem should be a locally preservable thing, and that the overall answer should be findable as some combination of good partial answers. For this to work well in practice, though, the fitness of a good partial answer has to be appreciably better than that of a typical random one. And there is no general reason that has to be true for complicated constraints. A fitness measure might increase only for exactly the right combination, for instance, while being low for partially correct answers. We'd say the fitness peak is isolated.
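A small sketch of the difference, comparing a fitness measure that rewards partial matches to a hypothetical 20-bit target against a "needle" measure that rewards only the exact answer:

```python
import random

random.seed(2)

# Hypothetical 20-bit target string, chosen arbitrarily for illustration.
TARGET = [1, 0, 1, 1, 0, 1, 0, 0, 1, 1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 0]

def smooth_fitness(ind):    # partial answers earn partial credit
    return sum(a == b for a, b in zip(ind, TARGET))

def needle_fitness(ind):    # isolated peak: only the exact answer scores
    return 1 if ind == TARGET else 0

pop = [[random.randint(0, 1) for _ in range(20)] for _ in range(100)]
# Under smooth_fitness the population scores spread out, giving selection
# a gradient to follow. Under needle_fitness every wrong string looks
# equally unfit, so search degenerates to random guessing over 2^20 states.
print("distinct smooth scores:", len({smooth_fitness(p) for p in pop}))
print("distinct needle scores:", len({needle_fitness(p) for p in pop}))
```

With the isolated peak there is simply no signal for a GA to preserve and recombine; the scheme is no better than blind enumeration.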

The NKS intuition is that direct evolution rules are more plausible as explanations of most forms of complexity seen in the natural world than elaborate optimizations partially reached. Where optimizing processes occur, they are expected to work well when the thing being optimized is sufficiently simple, but to approach overall optima only roughly whenever the things being optimized are complicated.

It is plausible in a wide range of systems that feedbacks occur and move systems toward definite attractors. But it is unlikely to be the right explanation for every case of complexity one sees. And it is not necessary to explain any given instance of complexity. Complexity can arise without anything like it going on, just from the typical algorithmic behavior of simple rules.

I hope this helps. Reasonable question by the way.


06-13-2005 07:43 PM
