Show all 6 posts from this thread on one page |

**A New Kind of Science: The NKS Forum** (http://forum.wolframscience.com/index.php)

- **Pure NKS** (http://forum.wolframscience.com/forumdisplay.php?forumid=3)

-- **Cellular automata based on groups** (http://forum.wolframscience.com/showthread.php?threadid=145)

**Cellular automata based on groups**

I was recently reminded of a type of cellular automaton that I think might allow some interesting analysis.

On page 886 of the NKS book, I discuss cellular automata in which the colors of cells are viewed as being elements of a finite group, and the new color of each cell is given by a[t, i] = f[a[t-1, i-1], a[t-1, i]] where f is the group multiplication operation.

With multiplication table m, one gets after t steps:

NestList[Extract[m, Partition[#, 2, 1, {-1, 1}, 1]] &, init, t]

Here element 1 is taken to be the identity in the group, and at each step one pads on both sides with this.

If the initial conditions consist just of a single element {h}, then the pattern one gets is always nested, and in fact is just Pascal's triangle modulo the order of h. And what's going on is that the system is effectively just restricted to operating in a cyclic subgroup of whatever the original group was.
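This nesting is easy to check numerically. Here is a small Python sketch (Python rather than Mathematica, purely for illustration), taking for concreteness S3 with elements Permutations[Range[3]] numbered 1 through 6 as below, evolving from the single order-3 element numbered 4, and comparing the exponents against Pascal's triangle modulo 3:

```python
from math import comb

# S3 elements, numbered 1..6 as in the book's convention
elems = [(1,2,3),(1,3,2),(2,1,3),(2,3,1),(3,1,2),(3,2,1)]

def mul(i, j):
    # group product, matching elems[[i]][[elems[[j]]]] in the Mathematica code
    a, b = elems[i-1], elems[j-1]
    return elems.index(tuple(a[k-1] for k in b)) + 1

def evolve(init, t):
    # a[t,i] = f[a[t-1,i-1], a[t-1,i]], padding both sides with the identity 1
    rows = [list(init)]
    for _ in range(t):
        p = [1] + rows[-1] + [1]
        rows.append([mul(p[i], p[i+1]) for i in range(len(p) - 1)])
    return rows

h = 4                      # the rotation (2,3,1), which has order 3
rows = evolve([h], 6)
powers = {1: 0, 4: 1, 5: 2}   # h^0, h^1, h^2 in the numbering above
for t, row in enumerate(rows):
    # every cell is h^Binomial[t,i], i.e. Pascal's triangle mod the order of h
    assert [powers[c] for c in row] == [comb(t, i) % 3 for i in range(t + 1)]
```

The system stays inside the cyclic subgroup {1, 4, 5} generated by h, exactly as described above.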

But what happens if the initial conditions involve two elements {h1, h2}?

If {h1, h2} generate an Abelian subgroup, then the system will correspond to an additive rule, and one will again get a nested pattern.

To get something non-Abelian, the whole group must be non-Abelian.

The smallest non-Abelian group is S3---the symmetric group on 3 elements.

One can take the elements to be Permutations[Range[3]], or

{{1,2,3},{1,3,2},{2,1,3},{2,3,1},{3,1,2},{3,2,1}}.

Then the multiplication table for the group is

Outer[Position[elems, #1[[#2]]][[1, 1]] &, elems, elems, 1]

or in this case

{{1,2,3,4,5,6},{2,1,5,6,3,4},{3,4,1,2,6,5},{4,3,6,5,1,2},{5,6,2,1,4,3},{6,5,4,3,2,1}}
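As a cross-check, the table above can be reproduced in a few lines of Python (used here just for illustration), composing permutations the same way the Outer expression does:

```python
# S3 elements, numbered 1..6 in the order given above
elems = [(1,2,3),(1,3,2),(2,1,3),(2,3,1),(3,1,2),(3,2,1)]

def mul(i, j):
    # result[k] = a[b[k]], matching #1[[#2]] in the Outer expression
    a, b = elems[i-1], elems[j-1]
    return elems.index(tuple(a[k-1] for k in b)) + 1

table = [[mul(i, j) for j in range(1, 7)] for i in range(1, 7)]
assert table == [[1,2,3,4,5,6],[2,1,5,6,3,4],[3,4,1,2,6,5],
                 [4,3,6,5,1,2],[5,6,2,1,4,3],[6,5,4,3,2,1]]
# the table is not symmetric, so the group is non-Abelian
assert table[1][2] != table[2][1]
```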

Initial conditions like {1,5} or {2,3} then generate Abelian subgroups, and give nested patterns.

But some initial conditions do not generate Abelian subgroups. In the book, I show what happens with {5, 6}.

It's the same story, for example, with {2, 4}.
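One can confirm this with a short Python sketch (illustrative only): both {5, 6} and {2, 4} in fact generate all of S3, which is non-Abelian, while {1, 5} stays inside a cyclic subgroup:

```python
from itertools import product

elems = [(1,2,3),(1,3,2),(2,1,3),(2,3,1),(3,1,2),(3,2,1)]

def mul(i, j):
    a, b = elems[i-1], elems[j-1]
    return elems.index(tuple(a[k-1] for k in b)) + 1

def generated(gens):
    # closure of gens (plus the identity) under the group operation
    seen = set(gens) | {1}
    while True:
        new = {mul(a, b) for a, b in product(seen, repeat=2)} - seen
        if not new:
            return seen
        seen |= new

def is_abelian(s):
    return all(mul(a, b) == mul(b, a) for a, b in product(s, repeat=2))

for pair in [{5, 6}, {2, 4}]:
    g = generated(pair)
    assert len(g) == 6 and not is_abelian(g)   # each pair generates all of S3

assert generated({1, 5}) == {1, 4, 5}          # the cyclic subgroup of rotations
assert is_abelian(generated({1, 5}))
```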

One gets a complicated and seemingly random pattern. (See the attachment.)

But the question is: is there computational reducibility in this pattern? Can one use ideas from group theory or anywhere else to shortcut the computation of a given element?

I don't see how to do it. Though there is one tantalizing clue.

If one ignores (or grays out) all elements except 2, 3 and 6, then one gets just the pattern in greens, yellows and blues. And this is a pattern whose outline is just a simple nested rule 60 pattern.

But still its innards are complicated. Can anyone see a general approach that allows one to work out even these?

By the way, here's a slightly more general evolution function, relevant for permutation groups:

PGCAEvolveList[perms_, init_, t_] :=
  With[{m = Function[g, Outer[Position[g, #1[[#2]]][[1, 1]] &, g, g, 1]][perms]},
    NestList[Extract[m, Partition[#, 2, 1, {-1, 1}, 1]] &, init, t]]

SymmetricPermutations[n_] := Permutations[Range[n]]

AlternatingPermutations[n_] := Select[Permutations[Range[n]], Signature[#] == 1 &]

**Nonabelian CAs, computational power**

In a 1996 paper in Physica D, I showed that this kind of CA (based on a solvable group, like S_3) can be predicted quickly, even though its pattern looks more complicated than the usual linear CAs. You can get my paper from http://www.santafe.edu/~moore/pubs/semi.html .

First of all, I should mention that when I say I can predict the S_3 CA quickly, I mean in a parallel model (the class NC^2). I don't know how to do it on a serial computer more quickly than by explicit simulation.

Let's say we want to predict the state of a cell at a particular position and time, and we are given enough initial conditions to do this. Let's call the 6 elements 0, 1, 2 and -0, -1, and -2. 0 is the identity, 1 and 2 are the two rotations of order 3, and -0, -1 and -2 are the three reflections. Let's do the labeling in such a way that -x = (-1).x for x = 0, 1, or 2.

Then the idea of the fast parallel algorithm is

1) first, ignore everything but the parity (in this notation, the sign) of the permutation corresponding to the CA state. The sign acts like a linear CA mod 2, so we can quickly compute it, in parallel, for every cell in the space-time diagram in the light-cone above the cell we want to predict. (Obviously this needs a lot of processors, but the NC model allows a polynomial number.) This gives us a "coarse-grained" prediction of the CA --- what I'm doing here is looking at the quotient group S_3 / Z_3. (This is the "rule 60" behavior that Stephen noticed.)
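This coarse-graining is easy to verify numerically. Here is a Python sketch (illustrative only; element numbering as in Stephen's post, in which the odd permutations are 2, 3 and 6) showing that the signs of the cells evolve exactly as a linear CA mod 2:

```python
elems = [(1,2,3),(1,3,2),(2,1,3),(2,3,1),(3,1,2),(3,2,1)]
ODD = {2, 3, 6}   # the three reflections (odd permutations)

def mul(i, j):
    a, b = elems[i-1], elems[j-1]
    return elems.index(tuple(a[k-1] for k in b)) + 1

def evolve(init, t):
    rows = [list(init)]
    for _ in range(t):
        p = [1] + rows[-1] + [1]          # pad with the identity
        rows.append([mul(p[i], p[i+1]) for i in range(len(p) - 1)])
    return rows

# Evolve the S3 CA from {5,6}, then take the sign (parity) of each cell.
rows = evolve([5, 6], 20)
signs = [[1 if c in ODD else 0 for c in row] for row in rows]

# The signs alone follow the additive rule s[t,i] = s[t-1,i-1] XOR s[t-1,i].
xor_rows = [signs[0]]
for _ in range(20):
    p = [0] + xor_rows[-1] + [0]
    xor_rows.append([p[i] ^ p[i+1] for i in range(len(p) - 1)])

assert signs == xor_rows   # the parity is exactly a linear CA mod 2
```

This works because the sign map is a homomorphism from S_3 onto Z_2.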

2) now, note that the remainder of the CA state is given by an operation in Z_3, where the coefficients depend on the signs we calculated in (1). In our notation above, the sign of the element on the right determines the coefficient of the element on the left (this is a semidirect product). For instance,

x . y = x + y

but

x . (-1)y = (-1) 2x + y

because x(-1) = (-1)2x. The other 2 cases are analogous.
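These identities can be checked directly against the S3 multiplication table. Here is a Python sketch (illustrative only; it fixes a concrete labeling, with the rotation r and reflection f chosen as specific permutations, which is an assumption not fixed by the post):

```python
elems = [(1,2,3),(1,3,2),(2,1,3),(2,3,1),(3,1,2),(3,2,1)]

def mul(i, j):
    a, b = elems[i-1], elems[j-1]
    return elems.index(tuple(a[k-1] for k in b)) + 1

# Write every element canonically as f^s . r^x (s in Z2, x in Z3), with
# r = (2,3,1) a rotation of order 3 and f = (1,3,2) a reflection.
r, f = 4, 2
label = {}
for s in range(2):
    for x in range(3):
        g = 1
        if s:
            g = mul(g, f)
        for _ in range(x):
            g = mul(g, r)
        label[g] = (s, x)
inv = {v: k for k, v in label.items()}

def pair_mul(p, q):
    # multiply two elements via the group table, return the (sign, rot) pair
    return label[mul(inv[p], inv[q])]

# Cris's identities:  x . y = x + y   and   x . (-1)y = (-1)(2x + y)
for x in range(3):
    for y in range(3):
        assert pair_mul((0, x), (0, y)) == (0, (x + y) % 3)
        assert pair_mul((0, x), (1, y)) == (1, (2 * x + y) % 3)
```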

So, we have a Z_3 CA whose coefficients vary in space and time, and are given by the signs we calculated in (1). The complete state of our desired cell can then be calculated by summing over all paths from the initial conditions to it, and using a divide-and-conquer approach we can do this (again, on a massively parallel computer) with O(log^2 t) operations. The details are in my paper.

The upshot of all this is that predicting the CA for S_3 (and any solvable group) is in the class NC (specifically NC^2, and this can be tightened somewhat) which is believed to be a proper subclass of P. Therefore, unless NC=P this is not P-complete.

I think that P-completeness is a reasonable notion of "computational universality" for finite time (e.g. the sandpile model in d >= 3 and so on). This is certainly open for discussion; note, for instance, that it is logically possible that predicting rule 110 for finite time is in NC even though its arbitrary-time behavior is undecidable. This could happen if rule 110 is an exponentially slow simulation of a Turing machine (like Minsky's 2-counter machines).

From a computer science point of view, rather than viewing a given system as "computationally universal", it might make more sense to decide what questions we want to ask about it, and then try to classify their computational complexity. For a really hard system we might see something like:

1. predicting for finite time is P-complete

2. telling whether a state has a predecessor (not a garden of eden) is NP-complete

3. telling whether a finite-space system is periodic, or whether it will reach a given state, is PSPACE-complete

4. telling whether an infinite system (with finitely specified initial conditions) will reach a given state is undecidable.

For rule 110, to my knowledge, we only have 4. For Life, lattice gases, and other systems where we can build circuits with reusable components, we have 1, 2, 3 and 4. For sandpiles, where each component of the circuit can only be used once, we have 1 but not 3 (3 is in P since the system settles to equilibrium in polynomial time).

I think this is a more "nuanced" approach to the question of computational universality than the "universal/non-universal" distinction. Even the definition of "computational universality" is a little unclear, since we have to decide on an encoding by which inputs are translated into initial conditions, and outputs are extracted from the state.

__________________

Cristopher Moore

Computer Science Department

Physics and Astronomy Department

University of New Mexico

** **

I was looking at the code that Stephen posted, and then at Cris's paper. Below is some code which implements part of Cris's idea, although I don't quite understand how one gets the speed-up.

The rule 60 outline arises from the homomorphic image Z2={0,1} of S3=SymmetricPermutations[3]. Geometrically, the group S3 can be thought of as the symmetries of an equilateral triangle. A symmetry has homomorphic image 1 in Z2 if it involves flipping the triangle, and otherwise its image is 0. For instance, a rotation has image 0 in Z2.

The rule 60 outline arises since it is the CA for the group Z2. The elements in SymmetricPermutations[3] which involve flips are {2,3,6}. And so we get exactly the rule 60 outline by coloring those elements red and the others gray from

S3inZ2Graphic[init_, wid_, k_] :=
  Graphics[RasterArray[Reverse[PadRight[#, wid] & /@
    PGCAEvolveList[SymmetricPermutations[3], init, k]] /.
    {0 -> White, 1 -> Gray, 2 -> Red, 3 -> Red, 4 -> Gray, 5 -> Gray, 6 -> Red}]]

e.g. Show[S3inZ2Graphic[{2,4},100,70]]

This background is exactly the first step as described in Cris's post, and in his paper.

Because the background comes from an additive CA, it can be computed quickly. The question of whether an S3 group element at space-time position {x,t} is a flip or not is computationally reducible.

The missing information, namely which of the three possible elements it is, can be canonically identified with an element of the integers mod 3, Z3={0,1,2}. Geometrically, one can label the three vertices of the triangle {0,1,2} arbitrarily. Once this has been done, a symmetry gets a label in {0,1,2} according to the vertex to which 0 gets mapped.

Algebraically, the map from S3 to Z3 is not a homomorphism, but we can canonically write perm = {rot, flip}, where rot is in Z3 and flip is in Z2. Note flip.rot.flip == -rot.

In other words, when the two neighboring elements in the CA are of the form {a, flip} {b, c} one gets {a-b, flip+c}, and when they are of the form {a, no flip} {b, c} one gets {a+b, c}.

So the rule for the missing information has 3 colors, and depends on the background information.

Here is an implementation of this three color rule.

Z2TwistedAddition[init_, background_] :=
  FoldList[MapThread[Mod[Apply[#2, #1], 3] &,
      {Partition[#1, 2, 1, 2],
       RotateRight[#2] /. {0 -> Plus, 1 -> Subtract}}, 1] &,
    init, background]

where init is a list of Z3 elements. The background is a grid of 0's and 1's with the same width as Length[init]. The mapping between pairs and elements of S3 is given by

toZ3Z2 = {0 -> {0, 0}, 1 -> {0, 0}, 2 -> {0, 1}, 3 -> {1, 1}, 4 -> {1, 0}, 5 -> {2, 0}, 6 -> {2, 1}};

FromZ3Z2 = Reverse /@ Rest[toZ3Z2]
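One can check that this mapping is consistent with the twisted product rule described above. Here is a Python sketch (illustrative only) verifying {a, flip} {b, c} == {a-b, flip+c} and {a, no flip} {b, c} == {a+b, c} against the full S3 multiplication table:

```python
elems = [(1,2,3),(1,3,2),(2,1,3),(2,3,1),(3,1,2),(3,2,1)]

def mul(i, j):
    a, b = elems[i-1], elems[j-1]
    return elems.index(tuple(a[k-1] for k in b)) + 1

# the toZ3Z2 mapping (S3 index -> {rot, flip} pair), as in the post
to_pair = {1: (0,0), 2: (0,1), 3: (1,1), 4: (1,0), 5: (2,0), 6: (2,1)}
from_pair = {v: k for k, v in to_pair.items()}

for i in range(1, 7):
    for j in range(1, 7):
        (a, fa), (b, fb) = to_pair[i], to_pair[j]
        # if the left element is a flip, rotations subtract; otherwise they add
        rot = (a - b) % 3 if fa else (a + b) % 3
        assert to_pair[mul(i, j)] == (rot, (fa + fb) % 2)
```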

For instance,

Z2TwistedAddition[Join[{0, 1}, Table[0, {98}]],
  PadRight[#, 100] & /@ CellularAutomaton[60, {{1}, 0}, 97]]

gives 98 steps using essentially the same initial condition as {2,4} for PGCAEvolveList.
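As a sanity check on this construction, here is a Python sketch (illustrative; it uses the growing, identity-padded evolution rather than the fixed-width Mathematica version) showing that twisted addition over the background of flips reproduces the rot component of the direct S3 evolution from {2,4}:

```python
elems = [(1,2,3),(1,3,2),(2,1,3),(2,3,1),(3,1,2),(3,2,1)]
to_pair = {1: (0,0), 2: (0,1), 3: (1,1), 4: (1,0), 5: (2,0), 6: (2,1)}

def mul(i, j):
    a, b = elems[i-1], elems[j-1]
    return elems.index(tuple(a[k-1] for k in b)) + 1

def evolve(init, t):
    rows = [list(init)]
    for _ in range(t):
        p = [1] + rows[-1] + [1]
        rows.append([mul(p[i], p[i+1]) for i in range(len(p) - 1)])
    return rows

rows = evolve([2, 4], 30)
flips = [[to_pair[c][1] for c in row] for row in rows]   # the rule 60 background
rots  = [[to_pair[c][0] for c in row] for row in rows]

# Twisted addition: new rot = rot_left - rot_right if the left cell is a flip,
# rot_left + rot_right otherwise (identity padding contributes rot 0, flip 0).
twisted = [rots[0]]
for t in range(30):
    r = [0] + twisted[-1] + [0]
    f = [0] + flips[t] + [0]
    twisted.append([(r[i] - r[i+1]) % 3 if f[i] else (r[i] + r[i+1]) % 3
                    for i in range(len(r) - 1)])

assert twisted == rots   # the mod-3 part evolves exactly by the sign-twisted rule
```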

Given the background, Cris claims it is possible to speed up Z2TwistedAddition, using the capabilities of parallel computation.

I don't understand this part yet; perhaps the point is that Z2TwistedAddition is an additive rule for a fixed background, even though it is not translation invariant.

This example is a simple case of a more complicated rule (S3) which is based upon a simpler rule (Z2). In that sense it is part of the story of how simple rules can connect with more complicated ones. As an actual group, it might be misleading because groups correspond to symmetries.

On the other hand, it might be interesting if there is an experimental NKS approach to the extension problem in group theory, namely how to put together simpler groups to get more complicated ones. That is, to investigate the simplest space-time dependent rules

TwistedEvolution[rule, init, background]

and search for those rules which give groups.

** **

After reading Cris's paper more carefully, I now understand the second half better. Here is my understanding of it.

The basic observation to start with is that there is a CA-like evolution given by a binary operator which depends on the location in the grid, and that it has the property of additivity.

In this case, the two possible operators are Plus and Subtract Mod 3. It is Subtract in the case that the cell to the left is black in the given background evolution of rule 60. (The case covered in Cris's paper is more general.)

Because it is additive, any cell influences those in its future light cone in a linear way, which amounts to

hist[[p]] == C[p,q] hist[[q]]

where hist is the grid of states in the evolution, and p and q are space-time coordinates. The coefficient C[p,q] depends on the rule 60 background, and is independent of the Mod 3 values in hist.

The value at spacetime location p is then the sum of the contributions from the initial condition within its past lightcone.

hist[[x,t]]== Mod[Sum[ C[{i,0},{x,t}] hist[[i,0]], {i, x-t, x}],3]

The first use of parallel computing comes in performing the addition Mod 3 in Log[2,t] steps. For instance,

NestWhile[Mod[Apply[Plus, Partition[#, 2, 2, 1, 0], {1}], 3] &, list, Length[#] > 1 &]

computes Mod[Apply[Plus, list], 3] (as a one-element list) in Log[2, Length[list]] steps, and each step is parallelizable.
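The same pairwise reduction can be sketched in Python (illustrative only): each pass of the loop corresponds to one parallel round, so the number of rounds is Log[2, Length[list]], rounded up:

```python
from math import ceil, log2

def parallel_mod3_sum(values):
    # pairwise reduction mod 3; each while-loop pass is one parallel round
    xs, rounds = list(values), 0
    while len(xs) > 1:
        if len(xs) % 2:
            xs.append(0)                       # pad with the additive identity
        xs = [(xs[i] + xs[i+1]) % 3 for i in range(0, len(xs), 2)]
        rounds += 1
    return xs[0], rounds

total, rounds = parallel_mod3_sum(range(10))
assert total == sum(range(10)) % 3
assert rounds == ceil(log2(10))                # logarithmically many rounds
```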

The next step is to actually find the coefficients C[{i,0},{x,t}].

By splitting up the evolution of the twisted addition into two time intervals, Cris gets the identity

C[{x1,t1},{x2,t2}] == Sum[ C[{x1,t1},{s,t3}] C[{s,t3},{x2,t2}], {s, x1,x2}]

which allows one to compute coefficients for larger time intervals given coefficients for smaller time intervals.

Again parallel computing comes in. For every x1, x2, and time t1 in the past light cone, compute the coefficient for one time step C[{x1,t1},{x2,t1+1}]. These run in parallel, taking essentially one step.

Using that information and the identity above, the next stage involves computing for two time steps C[{x1,t1},{x2,t1+2}], again for every coordinate in the region.

That allows one for the next step to compute in parallel for four time steps. And so on.

The number of stages necessary to determine enough coefficients to calculate C[{i,0},{x,t}] is Log[2,t]. The calculation at each stage is basically a sum of at most t terms, and so the number of steps at each stage is at most Log[2,t]. Altogether, the computation takes at most (Log[2,t])^2 steps.
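The splitting identity itself can be checked numerically. Here is a Python sketch (illustrative; it extracts coefficients by impulse response over a fixed-width, zero-padded grid, which is an assumption about boundary handling) verifying that composing coefficients across an intermediate time t3 agrees with the direct coefficient:

```python
W, T = 12, 8

# rule 60 background of flips: b[t,x] = b[t-1,x-1] XOR b[t-1,x]
bg = [[0] * W]
bg[0][0] = 1
for _ in range(T):
    p = bg[-1]
    bg.append([(p[x-1] if x else 0) ^ p[x] for x in range(W)])

def step(row, brow):
    # twisted addition: v'[x] = v[x-1] + (-1)^brow[x-1] * v[x]  (mod 3)
    return [((row[x-1] if x else 0) + (-1 if x and brow[x-1] else 1) * row[x]) % 3
            for x in range(len(row))]

def coeff(t1, x1, t2, x2):
    # C[{x1,t1},{x2,t2}]: response at (x2,t2) to a unit impulse at (x1,t1)
    row = [0] * W
    row[x1] = 1
    for t in range(t1, t2):
        row = step(row, bg[t])
    return row[x2]

# divide-and-conquer identity: split the time interval at an intermediate t3
t1, t3, t2, x1, x2 = 0, 4, 8, 1, 7
direct = coeff(t1, x1, t2, x2)
composed = sum(coeff(t1, x1, t3, s) * coeff(t3, s, t2, x2) for s in range(W)) % 3
assert direct == composed
```

Since the evolution is linear mod 3, the identity holds exactly; the point of the parallel algorithm is that all the C[{x1,t1},{x2,t1+2^k}] at one doubling stage can be computed simultaneously.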

It is interesting to note that the size of the parallel computation can get fairly large, at least from the point of view of trying to do it in series.

From the NKS point of view, it is interesting that we get this complicated behavior within a fixed rule 60 background. That would make me think the S3 automaton is capable of universal computation. Yet the minimal logical form for any cell does not grow very fast: as a linear form it increases in length linearly with time. See the graphics for rule 30 on NKS[p.616] for a typical case which grows more quickly.

It would be interesting to determine more precisely the computational status of the S3 automaton.

** **

Here are some graphics for my previous post. The first two show the coefficients C symbolically, with the rule 60 evolution in the background; the second of these has been taken modulo 3.

The third graphic (z3coefficients.gif) shows the coefficients C[{1,1},q] where q ranges over all the cells. For each cell, this shows the mod 3 contribution from the first cell in the first row, given the basic rule 60 background.

**CA & SU(3)**

Need help with two things:

(1) Any kind of CA that connects with SU(3) -- the one associated with the organization of the baryon octet.

(2) Any kind of CA related to G(2); in particular, patterns of "star of David."

Thanks,

Joel D. Isaacson

