Philip Ronald Dutton
independent
Columbia, SC

Registered: Feb 2004
Posts: 172

fairness in CA's

If a cell is visited (turned on), then its "fairness coefficient" should improve. If it is visited as often as the other cells, its coefficient should be quite high. Fairness is computed the same way for all cells.

N = number of cells.
CA rules considered: the 256 elementary (1D) rules.


Which CA rules provide the individual cells with the best overall fairness?

The answer is obvious when all the cells are allowed to turn on at some step during the lifetime of the CA, which is the case with some of the rules (for example, a rule in which all cells are always on). But what happens when you start restricting the maximum number of cells allowed to be on at a given step of the CA execution? The limit imposes some left-to-right bias because of the nature of a 1D CA, but it would still be interesting to see some statistics if we say that the first X "on" cells from the left of the N-celled CA are the maximum number of on cells allowed.

Example: which CA rule appears to be "fair" when working with 50 cells and a maximum of 15 on cells per step?


Perhaps I can someday find the words to recast my question more clearly. Until then I will simply use this alternate explanation:
What rules are best at being fair? What about when the number of "on" cells is limited?

__________________
P h i l i p . R . D u t t o n

Posted 12-11-2006 05:15 AM

Alexandre Ismail

NYC

Registered: Nov 2005
Posts: 4

OK
So you would first have to define how you are computing this "fairness" value for a given cell.

Secondly, are you interested in "fairness" at a particular time slice? Or over a history?

With limitations imposed on the number of black cells, you're essentially asking questions about the distribution of on vs. off cells in the evolution of a system, which is fine. However, it seems to me that most other people tend to think of the patterns formed ON the cells, and of how these higher levels of organization (patterns, particles, walkers, etc.) evolve over history, not of the distribution of on vs. off cells in the underlying cell space. That said, your interest and proposal may have a basic thermodynamic / statistical-mechanical interpretation.
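To make the time-slice versus history distinction concrete (a small illustrative sketch, not part of the original reply; the rule, size, and step count are arbitrary): the row totals of an evolution give the on/off distribution at each time slice, while the column totals give each cell's visit count over the whole history.

data = CellularAutomaton[30, RandomInteger[1, 50], 200];  (* 50 cells, cyclic boundary, 200 steps *)
Total /@ data               (* number of on cells at each time slice *)
Total /@ Transpose[data]    (* number of visits to each cell over the history *)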

Posted 12-14-2006 07:30 PM

Jason Cawley
Wolfram Science Group
Phoenix, AZ USA

Registered: Aug 2003
Posts: 712

The class 2 glider rules are "fairest" in this sense, since they visit each cell occasionally. But they also have very low total density.

Here is some code for evolving these -


(* running total of a list; useful for finding the position of the nth 1 *)
accumulate[list_] := Rest[FoldList[Plus, 0, list]]

(* one step of the rule, keeping cells from the left only up to where
   the running count of 1s reaches maxon *)
step[rule_, maxon_, rowabove_] :=
 Take[Last@CellularAutomaton[rule, rowabove, 1],
  (If[Length[#] > 0, First[#], #] &@
     First@(Position[#, x_ /; x >= maxon] /. {} -> {Length[#]}) &@
   (accumulate@Last@CellularAutomaton[rule, rowabove, 1]))]

(* iterate step, then pad the ragged rows out to a rectangular array *)
evolve[rule_, maxon_, start_, t_] :=
 With[{ragged = NestList[step[rule, maxon, #] &, start, t]},
  PadRight[#, Max[Length /@ ragged]] & /@ ragged]

(* column totals as a share of the array total, scaled so perfectly
   even 1s would give 1. in every column *)
score[data_] :=
 Round[100 Length[data[[1]]]*1. (Total /@ Transpose[data])/Total[data, 2]]/100.

(* like score, but dividing by the number of rows times the allowed 1s
   per step instead of by the array total *)
score2[data_, allowed_] :=
 Round[100 Length[data[[1]]]*1. (Total /@ Transpose[data])/(Length[data]*allowed)]/100.


The first is just a utility function that turns a list into its running totals, which is helpful for finding the position of the nth 1.

The step function takes the previous row and calls the CellularAutomaton function for one step. Then it prunes the result using the Take function, with the amount to take determined by where the accumulated list goes over the max-number argument.
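For instance (a tiny illustrative check, with a made-up row and a maxon of 3):

accumulate[{0, 1, 1, 0, 1, 1, 0}]
(* -> {0, 1, 2, 2, 3, 4, 4}; the running total first reaches 3 at position 5,
   so Take would keep only the first 5 cells of that row *)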

The evolve function calls step over and over and pads the eventual result out to a rectangular array, suitable for array plotting, Transpose, etc.

The score function counts the 1s in each column through time and divides by the total in the whole array. The Round and division on the outside are just a way of returning the result rounded to a couple of digits to the right of the decimal.

The numbers score gives back are thus effectively a "share" relative to a theoretical 1 everywhere if the allowed 1s were distributed perfectly evenly throughout the vertical columns.

You could get a different measure (of potential as well as fairness, if you like) by dividing not by the total 1s in the array but by the max allowed 1s times the number of steps. Then, instead of just measuring how evenly the 1s visit all the columns, you are also scaling by how fully the rule "uses" all the "allowed" 1s. Thus the score2 function.
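For example, one could try this on the 50-cell, 15-on-cell case mentioned above (a hypothetical run; rule 30 and the 200 steps here are arbitrary choices, not from the original post):

start = Table[If[i == 25, 1, 0], {i, 50}];  (* a single 1 in the middle of 50 cells *)
data = evolve[30, 15, start, 200];

ArrayPlot[data]      (* the pruned evolution, padded to rectangular *)
score[data]          (* per-column shares relative to perfectly even 1s *)
score2[data, 15]     (* the same, scaled by the 15-on-cell allowance per step *)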

Fine imaginative question. Anything clear enough has an answer.

Posted 12-14-2006 10:46 PM

Philip Ronald Dutton
independent
Columbia, SC

Registered: Feb 2004
Posts: 172

constraints

Thanks, gentlemen, for the input thus far. Questions like these ultimately worry me for one particular reason. When dealing with CAs, networks, or other generic simultaneous-update automata, things get tricky when you constrain them. For example, with CAs (and this is mentioned above), if I constrain them so that only 15 cells are allowed to turn on, then I have to decide, somehow, which ones to turn on. That decision is arbitrary, and the CA does not make it for you. You have to say "start counting from the left" or something like that. I am not sure what to call this particular side effect (or should I call it a "problem"). Does anyone know the name of this side effect? If I say that a network should not create more than X new edges, then how do I figure out which ones count as "allowed," since at that step all the new edges are really "computed" simultaneously, like simultaneous updates?

This problem is everywhere with updating automata. I once tried to use a CA to discover properties of graphs (graph-theory graphs). The problem is that I could not figure out how to map the graph nodes to the CA cells; there are too many ways to do it. So for the scheme to be useful I would have to use all combinations (which I do not want to do).

Is this a clearly understood problem? Does the theory of simultaneous-update automata address this issue? Is it clearly stated in the literature?

What does this kind of problem say about the model (all simultaneous-update automata)?

Does NKS (the book) address any of these kinds of side effects when dealing with this model?

__________________
P h i l i p . R . D u t t o n

Posted 12-15-2006 09:59 PM