A New Kind of Science: The NKS Forum > Pure NKS > Markov Machines
Author
Jason Cawley
Wolfram Science Group
Phoenix, AZ USA

Registered: Aug 2003
Posts: 712

Markov Machines

While NKS focuses primarily on deterministic rules, the simple programs approach can be readily extended to stochastic processes. If these are kept simple enough, then NKS methods of enumeration and experimentation can be applied in straightforward ways. A particularly simple kind of stochastic rule system that can serve to illustrate this is the Markov machine or MM.

A Markov machine is a non-deterministic finite state machine specified by a state transition probability matrix or STM. The system has n states. In each state there is a set of probabilities, summing to 1, of moving to each of the possible states. The current state, read across the top of the matrix, gives the column; the state at the next step gives the row. The single number at each matrix position gives the probability of making that transition. Each step is independent of the last, with constant probabilities within the STM (if the probabilities can vary over time, the system is no longer Markovian).

A typical STM might look like -

{{0, 0.333333, 0}, {0, 0.333333, 0.666667}, {1., 0.333333, 0.333333}}

Note that columns must sum to 1 but rows need not. Also notice that deterministic finite state machines are a proper subset of MMs - exactly those with only 0s and 1s in their STMs and no fractional values.
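Since the columns must sum to 1 while the rows need not, it is worth checking any hand-entered STM. A minimal sketch of that check, in Python purely as a neutral cross-check (the function name is mine, not from the notebook):

```python
# Example STM from above: entry [row][col] is the probability of moving
# from state col+1 to state row+1. Every column must sum to 1.
stm = [[0, 1/3, 0],
       [0, 1/3, 2/3],
       [1, 1/3, 1/3]]

def is_column_stochastic(m, tol=1e-9):
    """Check that every column of a square matrix sums to 1 (within tol)."""
    n = len(m)
    return all(abs(sum(m[r][c] for r in range(n)) - 1) < tol for c in range(n))

print(is_column_stochastic(stm))  # True
```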

One can enumerate MMs by increasing the fineness of the gradations allowed in the probability matrix or by increasing the number of allowed states. In general the number of MMs with n states and k "choices" at each step is -

Length[Compositions[choices,states]]^states

Note that Compositions is a Mathematica function defined in the Combinatorica package, which returns all the ordered lists of a given number of non-negative integers summing to a given integer. Thus -

Compositions[3,3] = {{0, 0, 3}, {0, 1, 2}, {0, 2, 1}, {0, 3, 0},
{1, 0, 2}, {1, 1, 1}, {1, 2, 0}, {2, 0, 1}, {2, 1, 0}, {3, 0, 0}}
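The same counts can be cross-checked outside Mathematica. The sketch below (Python, with an illustrative compositions helper that mirrors the ordering shown above) reproduces the sizes of the rule spaces discussed in this post:

```python
def compositions(n, k):
    """All ordered lists of k non-negative integers summing to n,
    in the same order as Combinatorica's Compositions[n, k]."""
    if k == 1:
        return [[n]]
    return [[first] + rest
            for first in range(n + 1)
            for rest in compositions(n - first, k - 1)]

# Compositions[3,3] has 10 elements, so there are 10^3 = 1000
# three-state MMs with probabilities graded in thirds.
print(len(compositions(3, 3)))        # 10
print(len(compositions(2, 2)) ** 2)   # 9 two-state MMs with 50-50 chances
print(len(compositions(10, 4)))       # 286, the base of the 286^4 space below
```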

One can specify a random MM of a given size by first defining a function that takes an integer rule number and returns a unique state transition matrix -

stm[rn_Integer, chance_Integer, size_Integer] :=
  With[{big = Compositions[chance, size]},
   Transpose[Part[big, 1 + IntegerDigits[rn, Length[big], size]]/(1. chance)]]

Then by assigning down values to a variable you can specify a particular MM in that class, chosen at random, by -

statetran = stm[Random[Integer, 10^3 - 1], 3, 3]

That sort of space one can simply explore exhaustively. Note that there are 9 possible 2-state MMs with 50-50 or certain transitions, and 16 once chances of 0, 1/3, 2/3, and 1 are allowed - each easily enumerated and explored. With 3 states there are already 216 MMs with 50-50 chances, and 1000 with 1/3 chances allowed.

Other sizes are trivial to sample, but too large to span completely. Here we want 4 states and transition probabilities specified with 10% fineness; there are about 6.7 billion of those -

statetran = stm[Random[Integer, 286^4 - 1], 10, 4]

And here we look at the much more extensive space possible with larger numbers of states (having checked that Length[Compositions[6, 6]] is 462) -

statetran = stm[Random[Integer, 462^6 - 1], 6, 6]
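The decoding performed by stm[] can be mirrored step by step in Python, which makes it easy to see how a rule number becomes a matrix. This is an illustrative sketch (names are mine); it includes its own compositions helper so it stands alone:

```python
def compositions(n, k):
    """All ordered k-tuples of non-negative integers summing to n,
    in the same order as Combinatorica's Compositions[n, k]."""
    if k == 1:
        return [[n]]
    return [[first] + rest
            for first in range(n + 1)
            for rest in compositions(n - first, k - 1)]

def stm_from_rule(rn, chance, size):
    """Python sketch of stm[] above: decode an integer rule number into a
    size x size column-stochastic matrix with entries in multiples of 1/chance."""
    big = compositions(chance, size)
    base = len(big)
    # digits of rn in base Length[big], most significant first, padded to
    # `size` digits - mirroring IntegerDigits[rn, Length[big], size]
    digits = []
    for _ in range(size):
        digits.append(rn % base)
        rn //= base
    digits.reverse()
    columns = [[x / chance for x in big[d]] for d in digits]
    return [list(row) for row in zip(*columns)]  # transpose: states run down rows

# Rule 2 in the 2-state, 50-50 class is the deterministic alternator:
print(stm_from_rule(2, 2, 2))  # [[0.0, 1.0], [1.0, 0.0]]
```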

Any of the above specifies an STM uniquely. Then we just need to evolve the system so specified, with a pair of functions: one to evaluate a single step, and another to chain a number of steps together.

next[current_Integer] := Module[{a = Random[Real], j},
  For[j = 1, j <= Length[statetran], j++,
   If[a < Total[Table[statetran[[i, current]], {i, 1, j}]], Return[j]]]]

trajectory[start_Integer, steps_Integer] := NestList[next, start, steps]

(I'm sure there is a cleaner way to do the step, but that suffices.)
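For readers without Mathematica, the same pair of functions can be sketched in Python (illustrative names; the column-reading convention matches next above, sampling the next state from the current state's column of cumulative probabilities):

```python
import random

def next_state(mat, current):
    """One step of the Markov machine: sample the next state from
    column `current` (1-based) of the transition matrix."""
    a = random.random()
    cumulative = 0.0
    for j in range(len(mat)):
        cumulative += mat[j][current - 1]
        if a < cumulative:
            return j + 1
    return len(mat)  # guard against rounding shortfall in the column sum

def trajectory(mat, start, steps):
    """Chain `steps` applications of next_state, keeping the history."""
    states = [start]
    for _ in range(steps):
        states.append(next_state(mat, states[-1]))
    return states

# Deterministic example: states 1 and 2 swap on every step.
swap = [[0, 1], [1, 0]]
print(trajectory(swap, 1, 5))  # [1, 2, 1, 2, 1, 2]
```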

That is enough to see single trajectories, as lists of states. But obviously one would like to see many examples starting from each of the possible initial states. A single line to do that and sort the results slightly, with each element suitable for plotting as a typical NKS array, is -

evolutions[size_Integer] :=
Table[ Transpose[Sort[Table[trajectory[i, 20], {10}]]], {i, 1, size} ]

A note about reading these plots. Each column down the page is one run of the Markov machine. The color at each step gives the state of the machine at that step in that run (lighter is a lower state number). Adjacent columns are independent runs. Sorting has simply arranged runs that happen to agree for several steps in order from left to right across the display; as soon as two runs differ, the higher-valued state at that position goes on the right. The additional arrays are the same Markov machine, using the same state transition matrix, but started in a different initial state.

You can regenerate any line containing evolutions to sample additional runs, and regenerate any line containing statetran to try a new probability table, aka a different MM.

One can then readily explore issues of attractors and basins, averages for time spent in each state and the evolution of those averages over time, and the complexity of the internal dynamics. The NKS approach would stress looking at the variety in typical dynamics ("transients") rather than simply calculating long run averages.

I hope this is interesting. A sample notebook with the necessary functions is attached.

Attachment: markovmachines5.nb


09-29-2004 10:07 PM
Tony Smith
Meme Media
Melbourne, Australia

Registered: Oct 2003
Posts: 167

Not necessarily non-deterministic

More than 7 months ago I posted an animation of the simplest example I have found of a class of reasonably common mechanisms which I now see are equivalent to what Jason has called "non-deterministic Markov Machines", but which are clearly deterministic Class 3.

The "Life in a Tube" rule space in which I made these discoveries is simply Conway's Life applied to a narrow cylindrical universe, its rules each being notionally equivalent to a many coloured 1D CA. Most of the interesting subclasses of these rules involve at least one complete circumference of live cells at the leading edge, these circumference(s) following Rule 22 and thus often Rule 90 at period 4. At the trailing edge of the Rule 22 area there is interference from debris which forms more debris, and it is in the succession of that debris that we find rich examples of all four of Wolfram's classes.

Differences between the randomising efficiency of Rule 22 and Rule 90 are illustrated in the NKS book (p. 265). However, the widening of the Life in a Tube active zone, between its leading edge moving at c and the often more slowly stabilising interference at its trailing edge, can turn even a pure Rule 90 zone into an effective source of randomly timed bit arrivals.

Debris stabilisation generally involves a small number of distinct mechanisms which follow each other via a Markov process. I have been calling these processes "metabolisms" and representing them as directed graphs. The animation linked at the top can be represented by five distinct stabilising (or temporarily stabilised) edge states, each stabilising (or persisting) across 16 generations. The combination of the current state and the incoming active zone bit determines which, if either, of the two fully stabilised outcomes is generated, and the successor edge state.

As mentioned in the linked post, the wash-up is that 500 cycles of the Markov Machine (8000 generations of Life in a Tube) generates 158 apparently random bits. Running a short script that emulates the Markov process for 5000 cycles generates 1668 bits, seemingly converging towards a c/3 rate, which is consistent with the role of effectively random sequences of incoming bits in determining whether or not to generate an output bit. Two temporarily stabilised states persist for as long as the incoming bits are zero.

__________________
Tony Smith
Complex Systems Analyst
TransForum developer
Local organiser


09-30-2004 06:37 AM
Jason Cawley
Wolfram Science Group
Phoenix, AZ USA

Registered: Aug 2003
Posts: 712

A minor improvement to the previous is possible, as follows.
Define -

displayMM[n_Integer] := With[{
   e = evolutions[n],
   colorrules = Thread[Range[n] -> (GrayLevel /@ Range[0, 1, 1/(n - 1)])]},
  Show[GraphicsArray[
    {Table[ListDensityPlot[
       Reverse@Part[e, j],
       AspectRatio -> Automatic,
       ColorFunction -> (# /. colorrules &),
       ColorFunctionScaling -> False,
       FrameTicks -> False,
       DisplayFunction -> Identity],
      {j, 1, n}]}], ImageSize -> 72*8]]

Put that in the data generation and display function section of the notebook and make it an initialization cell.

Then call it with argument set to the state size of the MM you want, after setting statetran downvalues as before. So your function calls in the working section reduce to just -

displayMM[3]

Thanks to Richard Phillips for help on these improvements. A few other minor corrections are also made in the revised notebook, attached.

Attachment: markovmachines5.nb


10-01-2004 08:29 PM
Jason Cawley
Wolfram Science Group
Phoenix, AZ USA

Registered: Aug 2003
Posts: 712

As for what they do, the 2-state, 2-choice MMs are the easiest to analyse. There are 9 of them. MMs 0 and 8 go immediately to fixed points and stay there, differing only in which state is the fixed point. MMs 3 and 7 remain in their "attractor" state if they begin there, and otherwise fall into it after a limited number of steps - half of the remaining runs at each step, on average. So in 10 sample evolutions, one typically sees a single run lasting 4 steps but none longer. MM 2 alternates. MM 6 remains in whatever state it starts in. MM 4 randomizes evenly; after 4-5 steps, size 10 populations from either initial condition are indistinguishable, with 4-6 out of 10 in either state at a typical step. MMs 1 and 5 give the most interesting behavior possible for this simple class. Each has a "repelling" state that never lasts more than one step, and another that flips or stays randomly. As a result they alternate with unpredictable frequency, spending more time in a favored state by "lingering" there.
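The deterministic members of this class are easy to check by hand. The sketch below (Python, hardcoding the three compositions of 2 into 2 parts and the base-3 digit decoding used by stm[] above) reconstructs each of the nine matrices from its rule number:

```python
# The nine 2-state, 2-choice MMs: each column is one of the three
# compositions of 2 into 2 parts - {0,2}, {1,1}, {2,0} - scaled by 1/2,
# and the rule number is read as two base-3 digits (matching stm[] above).
cols = [[0.0, 1.0], [0.5, 0.5], [1.0, 0.0]]

def mm(rn):
    d1, d2 = divmod(rn, 3)       # most significant digit picks column 1
    c1, c2 = cols[d1], cols[d2]
    return [[c1[0], c2[0]], [c1[1], c2[1]]]

# MM 0 sends every state to state 2 (a fixed point); MM 8 to state 1.
# MM 2 alternates deterministically; MM 6 stays wherever it starts.
print(mm(2))  # [[0.0, 1.0], [1.0, 0.0]] - the alternator
print(mm(6))  # [[1.0, 0.0], [0.0, 1.0]] - the identity
```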

Sticking with 2-state MMs, there are 16 with 1/3 chances, 25 with 1/4 chances, 36 with 1/5 chances, and 49 with 1/6 chances. These can show marginally greater variety in the mixed-chance cases, with slightly less symmetry beginning with the 1/3 case. Even symmetric ones give slightly more interesting behavior, as e.g. when each state has a 2/3 chance to remain and a 1/3 chance to flip. That gives runs of varying lengths without a favored state overall. All the forms seen in the 2-2 case occur. "Slow leak" attractors (generalizations of the MM 1 and MM 5 cases) also appear, with an unfavored repelling state and an attractor that "decays" with low probability. Go up to 3 or more states and there appears to be considerably more variety in averages, periods, etc.


10-01-2004 08:58 PM
Jason Cawley
Wolfram Science Group
Phoenix, AZ USA

Registered: Aug 2003
Posts: 712

Since my previous posts I have been looking at obvious generalizations of MMs. The first is to consider a "machine with input", where you simply have some set of transition matrices and choose among them at each step according to some input bit. The next is just to decompose such an input choice into contributions from any number of sources or "players". The result is a Markov game. The following then form a regular hierarchy, in which each indented class is a proper subset of those above it -

Markov games
----Finite deterministic games (only 0s or 1s in transition matrices)
----Markov machines with input (1 player MGs)
--------Finite machines with input (1 player, 0-1)
--------Markov machines (1 player MGs always doing 1 move)
------------Finite state machines (1 player, 1 move, 0-1)

Even with quite modest numbers of elements allowed at each step there are huge numbers of these, but they are easily coded in entirely general form, enumerated, run and displayed, etc. The special cases just have 1s in certain parameter values.
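The "machine with input" case can be sketched in a few lines. This is an illustration of the idea as described above, not code from any attached notebook; the matrices and names are hypothetical:

```python
import random

def step_with_input(matrices, state, input_bit):
    """Markov machine with input: the input bit selects which transition
    matrix governs this step, then the next state is sampled from the
    chosen matrix's column for the current state (1-based)."""
    mat = matrices[input_bit]
    a = random.random()
    cumulative = 0.0
    for j in range(len(mat)):
        cumulative += mat[j][state - 1]
        if a < cumulative:
            return j + 1
    return len(mat)  # guard against rounding shortfall in the column sum

# With these two deterministic matrices the input fully controls the
# machine - bit 0 holds the state, bit 1 swaps it - i.e. a 1-player
# "finite machine with input" in the hierarchy above.
hold = [[1, 0], [0, 1]]
swap = [[0, 1], [1, 0]]
state = 1
for bit in [0, 1, 1, 0]:
    state = step_with_input([hold, swap], state, bit)
print(state)  # 1
```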


10-11-2004 10:55 PM
Jason Cawley
Wolfram Science Group
Phoenix, AZ USA

Registered: Aug 2003
Posts: 712

Looking at some of the simple, straight MMs again, typical examples of interesting behavior can occur with a matrix like -

{{0,0.167,0.833,0},{0,0.667,0,0},{0.833,0.167,0,0},{0.167,0,0.167,1}}

By all counts a very simple system, with only 4 possible states. State 4 is an attractor (the 1 on the main diagonal) and soon takes over the system whatever the starting state. State 2 is a "garden of Eden" state that occurs only on transients, i.e. from initial conditions, though it takes a while to decay (some typically remains in a large sample). States 1 and 3 oscillate before decaying into state 4. This is a remarkable variety of simple behaviors within so simple a single system.
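The takeover by state 4 can be checked numerically by propagating a probability distribution through the matrix. A minimal Python sketch (used purely as a cross-check; rows index the next state, columns the current state, matching the STM convention above):

```python
# The 4-state STM above: P[r][c] is the probability of moving from
# state c+1 to state r+1. The 1 at P[3][3] makes state 4 absorbing.
P = [[0,     0.167, 0.833, 0],
     [0,     0.667, 0,     0],
     [0.833, 0.167, 0,     0],
     [0.167, 0,     0.167, 1]]

def step_distribution(P, pi):
    """Propagate a probability distribution over states one step:
    pi'[r] = sum over c of P[r][c] * pi[c]."""
    n = len(P)
    return [sum(P[r][c] * pi[c] for c in range(n)) for r in range(n)]

# Start spread evenly over the four states; after a couple hundred steps
# essentially all of the mass has fallen into the attractor, state 4.
pi = [0.25, 0.25, 0.25, 0.25]
for _ in range(200):
    pi = step_distribution(P, pi)
print(round(pi[3], 2))  # 1.0
```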

When the long run behavior is a probabilistic spread among states, it does not mean they are all the same. For example, with -

{{0,0,0.333,0},{0,0.667,0.333,0},{1,0.333,0.167,0.167},{0,0,0.167,0.833}}

The long run macro state is well-mixed noise, with basically a 1-3-3-3 distribution of time spent in the states. But at the micro level, state 4 comes in "runs" (almost always remaining stable, with only one unlikely decay alternative), while state 3 is quite unstable (it sprays into all states, and rarely remains in its original one for 2 consecutive steps). Orbits from the other states allow both to be equally well represented in the long run, despite that sharp local difference. Also note the underrepresentation of state 1, a "deterministic repeller", fed only by a portion of an unsteady state and always returning to that source state immediately, like a minor phase of state 3.
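The 1-3-3-3 split can be recovered by power iteration on this STM. A Python sketch (illustrative; the renormalization step absorbs the rounding in the 0.333/0.167 entries):

```python
# The second 4-state STM above, entered as a Python list: columns give
# the current state, rows the next state.
P = [[0, 0,     0.333, 0],
     [0, 0.667, 0.333, 0],
     [1, 0.333, 0.167, 0.167],
     [0, 0,     0.167, 0.833]]

def stationary(P, iters=500):
    """Approximate the long-run distribution by repeatedly applying the
    matrix to a uniform start and renormalizing each step."""
    n = len(P)
    pi = [1.0 / n] * n
    for _ in range(iters):
        pi = [sum(P[r][c] * pi[c] for c in range(n)) for r in range(n)]
        total = sum(pi)
        pi = [x / total for x in pi]
    return pi

# Roughly a tenth of the time in state 1 and three tenths in each of
# the others - the "1-3-3-3" distribution described above.
print([round(x, 2) for x in stationary(P)])  # [0.1, 0.3, 0.3, 0.3]
```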

Code to evolve a MM without using the enumeration scheme, just entering any state transition matrix manually, is -

evolveSTM[mat_, steps_Integer, sample_Integer] :=
Table[Sort[Table[NestList[next[mat, #] &, i, steps], {sample}]], {i, 1, Length[mat]}]

Note that this still needs a "next" function like the one in the previous notebook, but in a two-argument form - next[mat_, current_] - reading the transition matrix from its first argument rather than from the global statetran...


10-15-2004 09:46 PM
