A New Kind of Science: The NKS Forum > Pure NKS > Distance Transforms
Author
Richard Scott
CDx Labs
Suffern, NY

Registered: Feb 2004
Posts: 4

Distance Transforms

I'm interested in applying NKS to image processing and thought it might be fun to look at distance transforms, to see what generalized distance transforms can do. I started thinking about this problem at last year's NKS conference as a pure NKS study. I would appreciate any further references.

Distance transforms are simple two-dimensional rules in which each point is labeled by its distance to the closest of a set of seed points. They can be implemented by growing the seeds: each point is assigned a distance equal to that of its closest neighbor plus a positive increment. The distance increment between neighbors is either constant or depends only on the direction of the neighbor.
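The growing-seeds description above can be sketched directly. This is a minimal Python illustration (the attached notebook is in Mathematica; the function and variable names here are my own), relaxing every pixel against its 8 neighbors until a fixed point is reached:

```python
INF = float("inf")

def distance_transform(seeds, height, width, increment=1):
    """Label each point with its distance to the nearest seed by
    repeatedly setting it to its closest neighbor plus an increment."""
    dist = [[0 if (r, c) in seeds else INF for c in range(width)]
            for r in range(height)]
    changed = True
    while changed:  # relax until nothing changes: the fixed point
        changed = False
        for r in range(height):
            for c in range(width):
                for dr in (-1, 0, 1):
                    for dc in (-1, 0, 1):
                        nr, nc = r + dr, c + dc
                        if (dr or dc) and 0 <= nr < height and 0 <= nc < width:
                            cand = dist[nr][nc] + increment
                            if cand < dist[r][c]:
                                dist[r][c] = cand
                                changed = True
    return dist
```

With a constant increment of 1 over all 8 neighbors this produces chessboard (Chebyshev) distances; direction-dependent increments give kernels like the octagonal and spiral ones.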

In a generalized distance transform the increment varies spatially and is calculated dynamically by some rule. The rule should operate in one step, calculating the final value of a point from the final values of its smaller neighbors. Without this restriction it would be possible to define a rule whose result only slowly converges to its final value, taking away the possibility of a fast sequential implementation.

A nice property of distance transforms is that the final result does not depend on the update order; parallel and sequential update orders produce the same output distance image. If this were not so, the distance would not be a minimum. A pixel is ready for its final update when all its smaller neighbors have reached their final distances. This leads to efficient algorithms that update each pixel just once: O(N) updates for an image with N pixels, compared with the O(N^2) updates that may be needed if pixels are updated in parallel. In a sense a distance transform displays its complete evolution; there is no need for a separate time dimension. The distance is the time.
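The one-update-per-pixel schedule can be sketched with a priority queue: pixels are finalized in order of increasing distance, so each pixel's smaller neighbors are already done when it is popped. This is essentially Dijkstra's algorithm; a binary heap gives O(N log N), and with small integer increments a bucket queue would give the true O(N). A Python sketch with names of my own invention:

```python
import heapq

def distance_transform_seq(seeds, height, width, increment=1):
    """Finalize each pixel exactly once, in order of increasing
    distance, so every pixel's smaller neighbors are already final."""
    dist = {}
    heap = [(0, s) for s in seeds]
    heapq.heapify(heap)
    while heap:
        d, (r, c) = heapq.heappop(heap)
        if (r, c) in dist:
            continue  # already received its final update
        dist[(r, c)] = d
        for dr in (-1, 0, 1):
            for dc in (-1, 0, 1):
                nr, nc = r + dr, c + dc
                if (dr or dc) and 0 <= nr < height and 0 <= nc < width \
                        and (nr, nc) not in dist:
                    heapq.heappush(heap, (d + increment, (nr, nc)))
    return dist
```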

The causal network of a distance transform is simply an acyclic grid graph (including diagonals) where a node at each pixel has directed edges to its larger neighbors. To avoid cycles a pixel cannot decide on its own distance increments, instead it sets the increments to be used by its (larger) neighbors.

A distance transform is equivalent to a rule on a directed acyclic network, and vice versa. From an acyclic network I can set the distances to be the ordering of a topological sort of the network. The seeds are the roots with no incoming edges, and the distance increments are set equal to the differences between neighbors, except that any unwanted edges are effectively removed by setting their distance increments larger than the difference. Multiplying the distance increments and seed distances by a constant does not affect the causal network, so I can simulate the effect of a bounded rule on the network, with output values < 2^n, by scaling the distances by 2^n and using the lower n bits for the rule.

A generalized distance transform can be thought of as having two rules, one to update the network, or distance increments, and one to update the color of the pixels, or perhaps as one rule with two purposes. I started out by looking at totalistic and rotationally symmetric rules with various fixed kernels and initial conditions. Now I'm starting to look at dynamic kernels.

One way to set up dynamic kernels in a 2-color system is to have two 3x3 kernels, one used by white pixels and the other by black pixels. Another way is to use the colors of the smaller neighbors as distance increments for the larger neighbors, rotating the pattern of smaller neighbors by 180 degrees.

A pixel has 8 neighbors, so there are 2^9, or 512, totalistic rules. Actually, since the update rule only uses smaller neighbors and almost all points have 3 or 4 smaller neighbors, the sum of the neighbors ranges from 0 to 4, giving 2^5, or 32, rules. For rotationally symmetric rules, the set of smaller neighbors is encoded as an 8-bit number. The minimal encoding among equivalent rotations is Min[Table[bitRotateRight[nbrcode, n], {n, 0, 7}]]. There are 36 different rotationally symmetric neighborhoods. Applying the minimizing rotation to the 8-bit encoding of the colors of the smaller neighbors gives an 8-bit input to a rotationally symmetric rule. Taking the first 3 bits gives 2^(2^3), or 256, rules; taking the first 4 bits gives 2^(2^4), or 65536, rules.
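The minimal-rotation encoding is easy to check numerically. In this Python sketch (mirroring the Mathematica bitRotateRight above), counting the distinct canonical codes over all 256 neighborhood patterns recovers the 36 rotationally symmetric neighborhoods:

```python
def bit_rotate_right(code, n, width=8):
    """Rotate an 8-bit neighborhood code right by n bits."""
    n %= width
    return ((code >> n) | (code << (width - n))) & ((1 << width) - 1)

def minimal_encoding(code):
    """Canonical form: the minimum over all 8 rotations."""
    return min(bit_rotate_right(code, n) for n in range(8))

# Distinct canonical forms = rotationally symmetric neighborhoods.
num_classes = len({minimal_encoding(c) for c in range(256)})
```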

An interesting question is: can these rules produce circular distances, with distances updated just as Min(3x3 kernel + 3x3 neighbors)? I wasn't sure, because for standard two-dimensional rules the boundary of the circular area grows more slowly than the propagation of the outer edge, needing extra iterations for the intrinsic randomness generation to average out the square kernel effects (see the image in NKS p. 178). Here the complete image is generated in the equivalent of one time step of a parallel CA. But the answer for dynamic kernels is yes (dtGen1k*.gif in dtGenTotGif.zip).

There are some fabulous interactions of structures spiralling out from the center. There are dendritic fracture lines growing on a class 3 background (rule 46506 in dtrot16random.gif). I like rules where the patterns propagate over the diagonals of the image, otherwise they tend to be elementary 1D cellular automata embedded in a quadrant. Many show windmill-like patterns sometimes with random boundaries and spider-web like filaments extending between them, setting off chain reactions where they hit the next blade.

The two-dimensional extensions of elementary rule 22 are interesting. Rule 22 (and its complement 151) is, I think, the only one-dimensional totalistic rule with complex behaviour from simple initial conditions (NKS p. 263). Its totalistic representation is {1,1,0,1,0,0,0,0,0}, meaning that a pixel is 1 if 0, 1 or 3 of its neighbors are 1. Unfortunately I have my numbering scheme backwards: I'm displaying 1 as white and 0 as black, and I failed to reverse the digits, so I am calling this rule 416 = 256 + 128 + 32. (See distanceTransformTot.gif.) 16-bit rotationally symmetric rule 27031 is equivalent to totalistic rule 416. With unsymmetric initial conditions and octagonal distances the patterns show almost no hint of directionality: no triangles, no grid, just random close-spaced dots. Dtrot16.gif shows sequential 16-bit rotationally symmetric rules, starting from 27031.

For simple distance transforms without a color rule in the lower bits a small change in the initial conditions creates a small change in the output. In the general case the change propagates almost everywhere it can with greater distance (see propagation416.gif).

Like standard 2D rules, these rules sometimes die out after some complex behaviour (rule 182 in distanceTransformRotational4.gif).
Rule 60 in distanceTransformRotational2.gif (and its complement 195 in distanceTransformRotational4.gif) appears to be reversible.
Some rules have interesting boundary growth, e.g. rule 137 in distanceTransformRotational3.gif.
Rule 176 with spiral distances in distanceTransformTotalistic.gif shows an amazing variety of behaviours in one image: iteration, nesting, class 3, class 4, strange boundaries, periodic complex repeated patterns, etc.

For the static kernels I used square distances {{1,1,1},{1,0,1},{1,1,1}}; octagonal distances {{3,2,3},{2,0,2},{3,2,3}}, since 3 is approximately 2 sqrt(2); and 'spiral' distances {{1,2,3},{8,0,4},{7,6,5}} with the first 8 initial conditions. The right quadrant of the spiral distance map produces a checkerboard alternating pattern of network connections, giving rise to a lower threshold and greater variety of complex behaviour (see kernels.gif). In these kernels 0 means infinity: don't use the corresponding pixel. Diamond or city-block distances, kernel {{0,1,0},{1,0,1},{0,1,0}}, didn't produce very interesting patterns.
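For static kernels like these, the whole transform can be computed in just two raster scans, one forward and one backward, each using half of the kernel (the classic chamfer algorithm). A Python sketch under the octagonal kernel above, with scaled weights 2 for straight steps and 3 for diagonal steps:

```python
def chamfer_distance(seeds, height, width):
    """Two-pass chamfer transform with the octagonal kernel
    {{3,2,3},{2,0,2},{3,2,3}}: straight steps cost 2, diagonals 3."""
    INF = float("inf")
    d = [[0 if (r, c) in seeds else INF for c in range(width)]
         for r in range(height)]
    fwd = [(-1, -1, 3), (-1, 0, 2), (-1, 1, 3), (0, -1, 2)]
    bwd = [(1, 1, 3), (1, 0, 2), (1, -1, 3), (0, 1, 2)]
    for r in range(height):            # forward raster scan
        for c in range(width):
            for dr, dc, w in fwd:
                nr, nc = r + dr, c + dc
                if 0 <= nr < height and 0 <= nc < width:
                    d[r][c] = min(d[r][c], d[nr][nc] + w)
    for r in reversed(range(height)):  # backward raster scan
        for c in reversed(range(width)):
            for dr, dc, w in bwd:
                nr, nc = r + dr, c + dc
                if 0 <= nr < height and 0 <= nc < width:
                    d[r][c] = min(d[r][c], d[nr][nc] + w)
    return d
```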

For the dynamic kernels I multiplied the coefficients by 2 to put the 1-bit color rule in the least significant bit, trying various combinations of the 4 kernels listed above (dtGenTotGif.zip).

It seems that distance transforms may provide an easy way of creating reversible 2D rules (cf. NKS p. 1017). The acyclic network can be reversed to form a new acyclic network. The final distances at seed points become the initial distances. The distance transform kernels are unchanged but are combined by taking Max(3x3 neighbors - 3x3 kernel). The new seed points are just the places where the original distances are not reversible, where Min[Max[nbrs - kernel] + kernel] != final distances. In image processing terms, these are the 'skeleton' points. Meanwhile the color rule applied to the network must be reversible, but this is the 1D problem discussed in the NKS book, p. 436. I haven't yet tested this.

I have generated too many images to attach them all - just some typical examples. They should all be reproducible from the attached notebook. Please email me if you are interested in seeing some of the examples that I mentioned above but did not attach.

Related Sections of the NKS Book
There are many sections of the NKS book that can be related to distance transforms.

The notes on pages 1034 and 1035 discuss sequential automata with intrinsic synchronisation, where the rules are set up so that a point can only be updated when its neighbors' values are ready. In a distance transform a point can be updated at any time, but the final update only occurs when the smaller neighbors have reached their final values. The final result does not depend on the update order.

The note on distance metrics, p.1030, mentions one-way distances. In a distance transform, distances are defined from any point back to the closest seed, but are not defined for other pairs of points.

The note on Voronoi diagrams, p.987, has a picture of a digital Voronoi diagram.

An interesting note on Generalized additivity, p.952, explains how any additive rule applied in parallel can show at most repetition and nesting. A simple distance transform is an additive rule since: Min[DistanceTransform[image1], DistanceTransform[image2]] = DistanceTransform[Min[image1, image2]].
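This additivity identity can be checked directly on small images. A brute-force Python sketch (chessboard distances; grayscale_dt and pointwise_min are my own illustrative names):

```python
def grayscale_dt(img):
    """Grayscale distance transform:
    out[p] = min over q of img[q] + chessboard_distance(p, q)."""
    h, w = len(img), len(img[0])
    return [[min(img[qr][qc] + max(abs(r - qr), abs(c - qc))
                 for qr in range(h) for qc in range(w))
             for c in range(w)] for r in range(h)]

def pointwise_min(a, b):
    """Pixelwise Min of two images."""
    return [[min(x, y) for x, y in zip(ra, rb)]
            for ra, rb in zip(a, b)]

# Additivity: Min[DT[img1], DT[img2]] == DT[Min[img1, img2]],
# because a min over the union of sources splits into a min of mins.
```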

The note on General associative rules, p. 956, shows how associativity gives rise to nested patterns. The Min operation of the standard distance transform is associative.

Summary of Distance Transform Properties
A distance transform has four properties:
1. Evolution to a fixed point after which there is no further change
2. Independence from update order. Parallel and sequential updates give the same result.
3. Triangle inequality for distances back to seed points
4. The final value at a point can be calculated in one step from the final values of its smaller neighbors.

Any rule that satisfies these properties can be considered to be a distance transform.
All the properties of distance transforms discussed above derive from these 4 basic ones.

Random distance transforms can be generated by giving each point a random distance >= 0. The seed points are the local minima of the distances.
A more interesting way of generating random distance transforms is to randomly choose kernel values in a range and to randomly select seed points.

Any set of distances can be generated by adding a suitable kernel at each point and taking the minimum, so there is no loss of generality in insisting on the Min operation. The triangle inequality is certainly satisfied when Min is used. Conversely, a kernel can always be constructed so that the distance at a point = Min(smaller neighbors + kernel), with kernel values > 0.
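The converse construction is straightforward: for each non-seed point, set the increment from every strictly smaller neighbor to the difference of the two final distances; the Min property then holds by construction. A small Python sketch (the helper name is mine):

```python
def reconstruct_increments(dist):
    """Build per-edge increments so that, at every non-seed point p,
    dist[p] == Min(dist[n] + inc[p, n]) over its smaller neighbors;
    each increment dist[p] - dist[n] is > 0 since n is strictly smaller."""
    h, w = len(dist), len(dist[0])
    inc = {}
    for r in range(h):
        for c in range(w):
            for dr in (-1, 0, 1):
                for dc in (-1, 0, 1):
                    nr, nc = r + dr, c + dc
                    if (dr or dc) and 0 <= nr < h and 0 <= nc < w \
                            and dist[nr][nc] < dist[r][c]:
                        inc[(r, c, nr, nc)] = dist[r][c] - dist[nr][nc]
    return inc
```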

Richard Scott, CDxLabs

Attachment: distancetransform.zip

Last edited by Richard Scott on 04-27-2004 at 02:50 PM


04-24-2004 05:56 PM
Richard Scott
CDx Labs
Suffern, NY

Registered: Feb 2004
Posts: 4

Some examples of generalized distance transforms.

The left column shows the resultant distances plotted with ContourPlot[].
The center column counts the number of larger neighbors at each point (0 to 8) - this is a representation of the directed acyclic graph, the causal network.
The right column shows the least significant bit of the distances - used to choose the kernel for the larger neighbors.
The numbers above each row show the rule definition and some statistics. For example:

{416, {60, 40, 60, 40, 60, 40, 60, 40}, {22, 22, 22, 22, 22, 22, 22, 22}, 1, 0.573652, 301}

416 is the totalistic rule (0, 1, or 3 white neighbors) applied to the lsbs (least significant bits) of the smaller neighbors

{60, 40, 60, 40, 60, 40, 60, 40} is kernel1, used when the lsb is white
{22, 22, 22, 22, 22, 22, 22, 22} is kernel2, used when the lsb is black

Initial condition is 1 (white) at the center of the image (257x257)
Proportion of white lsbs is 0.573652
Equivalent number of iterations needed by a parallel-update CA, e.g. Mathematica's CellularAutomaton[], is 301

{0, 8, {13432, 468, 5736, 23500, 15576, 6312, 0, 0, 1}} is the histogram of the counts of larger neighbors (center column image)

Richard Scott has attached this image:


05-13-2004 10:52 PM
Jason Cawley
Wolfram Science Group
Phoenix, AZ USA

Registered: Aug 2003
Posts: 712

I have a pedagogical suggestion. Make a notebook in which the focus is not on building up the functions and speeding them up - put those off in an initialization cell, closed up - but on playing with them, i.e. just varying a few parameters.

Then, do not put everything in a huge graphics array trying to show exhaustively everything that can happen. Instead pick three rules: the first with simple behavior, doing nothing interesting but showing the rule setup; the other two doing interesting but different things. You can have unevaluated full graphics array lines and the like in a final section, also closed up.

As it is, I found myself having Mathematica perform some pretty long calculations to evaluate bits of your notebook, without much idea what the point of any given piece of it was. While I can see various interesting things going on in your pictures, the connection between so many of them and so much code is far too diffuse to see what is happening. Simple examples first.

On substance, it is hard to say more without having the intuition for your critters that only the above would make easy, for me anyway. Whether you have to consider only rotationally symmetric cases is one question I do have, though. Restricting a search space to only include "legal" rules can sometimes throw away the more interesting cases.

Wolfram himself didn't see rule 30 in his first CA runs, because he threw out everything that wasn't left-right symmetric the first time he did it. As soon as he dropped this pre-conception he found much more. I have no idea if something similar might occur with your critters.


05-14-2004 12:50 AM
Richard Scott
CDx Labs
Suffern, NY

Registered: Feb 2004
Posts: 4

Hi Jason, please see the test notebook attached.

Attachment: dttest.nb


05-18-2004 09:23 PM
Jason Cawley
Wolfram Science Group
Phoenix, AZ USA

Registered: Aug 2003
Posts: 712

Looks great, and they are fun to play with. I can actually see some of what is going on. Obviously I haven't been looking at them for a year yet. It looks to me like a fine bit of NKS as it stands now.

The one thing I found myself encountering while playing around with it that might be improved, was fiddling with the various data fields. You lay them out clearly enough in your initial section. But I just naturally jumped in and fiddled with them first to see what they did. And occasionally put things in as parameters that didn't match what the function expects. You can guess what happens when one does that.

A few comments, or passing the variables to the function after they have been assigned to descriptively named variables, might help others with the same inclination to fiddle before they read. But this is a cosmetic bit of "bulletproofing".

ruletype = First[{0, 1, 2}]
rulenumber = 416
neighbordis1 = {2, 2, 2, 2, 2, 2, 2, 2}
neighbordis0 = {2, 2, 2, 2, 2, 2, 2, 2}
imagewidth = 65
imageheight = 65
initialcondition = {{1, 1, 1}, {1, 1, 1}}
clipborder = False

dttest[ruletype, rulenumber, neighbordis1, neighbordis0,
  imageheight, imagewidth, initialcondition, clipborder]

See the idea? It lets people see, separated out for clarity, the sort of data structure each field is supposed to be and what it means, in what is effectively the parameter input portion of the notebook. The rest of the examples beneath it can be just like you have them - just calls to the function with direct input parameters.

dtTest[0, 416, 2 {2, 1, 2, 1, 2, 1, 2, 1},
  2 {2, 1, 2, 1, 2, 1, 2, 1}, 65, 65, 1, False]


05-21-2004 12:51 AM
Richard Scott
CDx Labs
Suffern, NY

Registered: Feb 2004
Posts: 4

Distance transforms updated after NKS 2006

Here is my latest code and example files, updated after NKS 2006.

I'm planning to add some more rule types and develop some distance transform models of, for example, ink blots.

These rules behave a lot like growth rules - perhaps they should be called generalized growth rules - that might be a better name than "distance transform" - though the word "generalized" is unappealing. Any thoughts?

Summary of rule types

Ruletype 0: (512 rules) Totalistic rule for update of lsb. The distance increments to larger neighbors are updated with one of 2 kernels, selected by lsb.

Ruletype 1: (256 rules) 8-bit rotationally symmetric rule for update of lsb. The rule uses the lower 3-bits of the rotated border pattern.

Ruletype 2: (65536 rules) 16-bit rotationally symmetric rule using lower 4-bits of rotated border pattern.

Ruletype 3: (512 rules) Totalistic rule with directional selection of kernel. The kernel is selected using the lsb of the opposite neighbor instead of the center pixel.

Ruletype 4: (256 rules) 8-bit rotationally symmetric rule using lower 3-bits of rotated border pattern, with directional selection of kernel. (This set generates particularly nice pictures.)

Ruletype 5: (65536 rules) 16-bit rotationally symmetric rule using lower 4-bits of rotated border pattern, with directional selection of kernel.

Ruletype 6: (512 rules) Kernel selected by bit 2 of the distance. Totalistic rule runs in bit 1 as before. Supports hexagonal snowflake rule.

Ruletype 7: (512 rules) Totalistic worm. Updates the distance of the first larger neighbor to the left or right of the smallest neighbor, turning left for lsb = 0, right for lsb = 1.

Attachment: dttestpost.zip


06-21-2006 05:04 PM
Philip Ronald Dutton
independent
Columbia, SC

Registered: Feb 2004
Posts: 172

neighboring kernels

I was wondering if you have experimented with some notion of kernel "overlap." As a simple example, what if your Euclidean gradient (I assume it was output by some CA code) were very close to another Euclidean gradient (maybe you would use the term "kernel")? Once the two CA outputs touch each other, I wonder what kind of visible interference you would get?

__________________
P h i l i p . R . D u t t o n


07-12-2006 04:16 AM
