Implementing Conway’s Game of Life with Tensor Math
Conway’s Game of Life is a simulation that runs on a two-dimensional grid of cells. It has just a few very simple rules:
- An empty cell that has exactly 3 neighbors will be populated in the next timestep
- A populated cell requires either 2 or 3 neighbors to stay populated; in every other case it will be unpopulated in the next timestep
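These rules boil down to a single decision per cell. As a minimal sketch in plain Java (the class and method names are illustrative, not part of any library):

```java
public class LifeRules {
    /**
     * Applies Conway's rules to a single cell.
     *
     * @param populated whether the cell is currently populated
     * @param neighbors the number of populated neighbor cells (0-8)
     * @return 1 if the cell is populated in the next timestep, 0 otherwise
     */
    static int nextState(boolean populated, int neighbors) {
        if (populated) {
            // A populated cell survives with 2 or 3 neighbors.
            return (neighbors == 2 || neighbors == 3) ? 1 : 0;
        }
        // An empty cell is populated with exactly 3 neighbors.
        return neighbors == 3 ? 1 : 0;
    }

    public static void main(String[] args) {
        System.out.println(nextState(false, 3)); // birth
        System.out.println(nextState(true, 1));  // dies of loneliness
    }
}
```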
There are a lot of different ways to implement those rules. This post first looks at a solution in a quite unusual language, namely APL, before showing how to do the same thing using the Java tensor math library ND4J and its upcoming graph variant SameDiff.
As this is a pretty long post, and you might want to refer to some parts of it directly, you can use this table of contents to do so.
- APL
- Conway’s Game of Life in APL
- Conway’s Game of Life in Java with ND4J
- Conway’s Game of Life in Java with SameDiff
- Conclusion
APL
APL is an array-oriented programming language. It has its origin as a notation for (mathematical) thought. The abbreviation APL literally means “A Programming Language”, but has since also taken on the meaning of “Array Processing Language”. Introduced in 1964, it is one of the oldest programming languages still in use, predating even C, Pascal and Smalltalk by several years.
Due to its origins as a notation, APL is very terse and uses symbols that are easily drawn, but not usually found on any computer keyboard. This terseness has given it the reputation of a write-only language among people not used to reading APL programs. APL practitioners counter that, since a single line of APL can often express an algorithm that would take several pages in a C-like language, it shouldn’t be surprising that the line takes about as long to understand as those pages would.
APL’s array orientation allows it to work with high dimensional arrays seamlessly. High dimensional arrays are also called tensors these days. Mathematically they are a generalization of scalars, vectors and matrices into higher dimensions and tensor math is at the core of today’s deep learning frameworks. Therefore, it isn’t too much of a stretch to try to see how tensor math libraries and APL compare to each other.
Conway’s Game of Life in APL
Conway’s game of life is one of the many things that can be implemented with just one line of APL.
life←{⊃1 ⍵ ∨.∧ 3 4 = +/ +/ 1 0 ¯1 ∘.⊖ 1 0 ¯1 ⌽¨ ⊂⍵}
APL is evaluated from right to left, so let’s start by looking at each symbol from right to left in order of their appearance. After the symbols are explained, the expression is built up from them in order to explain what is done at each step. This will help build the intuition that will be needed to reassemble it using ND4J and SameDiff.
- life←{…}
- This defines a function. As you can see, there is no explicit parameter definition.
- ⍵
- This references an implicitly defined parameter. It means “the right parameter”; in this case it is the array that defines the game of life field.
- ⊂
- This symbol is a nesting operation, e.g. a two-dimensional array becomes a three-dimensional array, with only a single element (itself) in this new third dimension.
- _¨
- This operator applies the function to its left (at _) to each of the elements of its arguments. It can be interpreted as “each”.
- ⌽
- Horizontal rolling of the given array’s data (with wrap around).
- ¯_
- Marks a number (at _) as negative. Due to the way a normal - (minus) would be interpreted, an additional symbol, the high minus, is needed to distinguish the intention.
- ⊖
- Vertical rolling of the given array’s data (with wrap around).
- ∘._
- Outer product. Applies the given function (at _) to each combination of its left and right parameters.
- _/
- Reduce using the given function (at _) along the first dimension.
- +
- Element-wise addition. On scalars this is just simple addition.
- =
- Element-wise equality comparison.
- _._
- Inner product. Uses the function given on the right to combine data along the first dimension and the function on the left to combine those results along the second dimension. They are applied to the parameters right and left of the inner product.
- ∧
- Boolean AND
- ∨
- Boolean OR
- ⊃
- First selector, i.e. given an array, it returns the first entry in it.
With the symbol definitions out of the way, let’s look at what happens at each step. To make this easier to picture, let’s define how the game of life field that is going to be used looks. This will also define the value of ⍵, or field, for the remaining explanation.
0 0 0 0 0 0 0
0 0 0 1 0 0 0
0 0 0 0 1 0 0
0 0 1 1 1 0 0
0 0 0 0 0 0 0
The pattern on the field defines a “glider”, i.e. a structure that will move across the field when the simulation runs for multiple steps.
⊂⍵
By nesting the field one dimension deeper, the whole field can be addressed as a single element in APL. This is required for the next step, as it is going to roll the whole field instead of just single lines of it.
1 0 ¯1 ⌽¨ ⊂⍵
Horizontal rolling is now applied to each of the elements in the left array 1, 0, -1, along with the field. This results in one copy of the field being shifted to the left, one staying where it is, and one being shifted to the right.
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 1 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 1 0 0
0 0 0 1 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 1 0
0 1 1 1 0 0 0 0 0 1 1 1 0 0 0 0 0 1 1 1 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
Creating those shifted copies resulted in two additional arrays, one recording for every cell the state of its left neighbor, and the other the state of its right neighbor.
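The same shift-with-wrap-around operation is easy to state for a single row in plain Java; a small sketch (the helper name roll is an assumption that mirrors APL’s ⌽, not any library API):

```java
import java.util.Arrays;

public class Roll {
    // Rolls a row to the left by `shift` positions (negative = right), wrapping around.
    static int[] roll(int[] row, int shift) {
        int n = row.length;
        int[] out = new int[n];
        for (int i = 0; i < n; i++) {
            // Adding n before the second modulo keeps the index non-negative
            // for negative shifts.
            out[i] = row[((i + shift) % n + n) % n];
        }
        return out;
    }

    public static void main(String[] args) {
        int[] row = {0, 0, 1, 1, 1, 0, 0};
        System.out.println(Arrays.toString(roll(row, 1)));  // shifted left
        System.out.println(Arrays.toString(roll(row, -1))); // shifted right
    }
}
```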
1 0 ¯1 ∘.⊖ 1 0 ¯1 ⌽¨ ⊂⍵
The rules require that all 8 neighbors are examined. The outer product is used to shift each of the newly created fields vertically, such that for each one a copy is created that is shifted up, stays in the same position, or is shifted down respectively.
0 0 1 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 1 0 0
0 0 0 1 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 1 0
0 1 1 1 0 0 0 0 0 1 1 1 0 0 0 0 0 1 1 1 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 1 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 1 0 0
0 0 0 1 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 1 0
0 1 1 1 0 0 0 0 0 1 1 1 0 0 0 0 0 1 1 1 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 1 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 1 0 0
0 0 0 1 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 1 0
0 1 1 1 0 0 0 0 0 1 1 1 0 0 0 0 0 1 1 1 0
This operation is essentially the same as the first one, which was used to create the left and right neighbors. That first step could also have been written as the following instead, without any difference in the result:
1 0 ¯1 ∘.⌽ ⊂⍵
By applying those shifts in every direction, a 3x3 grid of fields was created, one for each direction in which a cell can have a neighbor. The rules, however, do not care about the direction of a neighbor, just the count and the original state of the cell.
+/ +/ 1 0 ¯1 ∘.⊖ 1 0 ¯1 ⌽¨ ⊂⍵
Summing up the neighbors along both axes results in a single field containing the number of neighbors each cell has – as long as it was originally empty. If it was populated, the count is one higher, because the cell itself is included in the sum.
0 0 1 1 1 0 0
0 0 1 2 2 1 0
0 1 3 5 4 2 0
0 1 2 4 3 2 0
0 1 2 3 2 1 0
To determine which cells are populated for the next generation, the rules specify that a cell has to have either 3 neighbors if it is unpopulated, or 2 or 3 neighbors if it is populated. Because populated cells include themselves in their neighbor count, it is necessary to factor that in when applying the rules.
3 4 = +/ +/ 1 0 ¯1 ∘.⊖ 1 0 ¯1 ⌽¨ ⊂⍵
An element-wise comparison to 3 and 4 yields two new arrays, which contain a 1 in each place where the comparison was successful.
0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 1 0 0 0 0 0 0 0 0 1 0 0
0 0 0 0 1 0 0 0 0 0 1 0 0 0
0 0 0 1 0 0 0 0 0 0 0 0 0 0
Comparing to 3 catches all cells that were either unpopulated with 3 neighbors, or populated with 2 neighbors, which directly satisfies the first rule and part of the second. Comparing to 4, however, yields cells that were either unpopulated with 4 neighbors, or populated with 3 neighbors. This is a bit of a problem, since the former case should not result in the population of a cell. It can be solved by ensuring that only the cells that were already populated are accepted from the comparison to 4.
1 ⍵ ∨.∧ 3 4 = +/ +/ 1 0 ¯1 ∘.⊖ 1 0 ¯1 ⌽¨ ⊂⍵
That can be achieved by a Boolean AND between the original field and the 4-comparison. As the results of the 3-comparison and the cleaned-up 4-comparison have to be combined into the new field, an inner product of AND and OR is used. This in turn requires something for the 3-comparison to be AND-ed with, which is why the parameters include a 1: AND-ing with 1 leaves the result of the 3-comparison unchanged. Producing the inner product like that results in a new field.
⊃1 ⍵ ∨.∧ 3 4 = +/ +/ 1 0 ¯1 ∘.⊖ 1 0 ¯1 ⌽¨ ⊂⍵
And because the result is still nested too deep, it is unpacked in the very last step, to produce the outcome of one step in Conway’s game of life:
0 0 0 0 0 0 0
0 0 0 0 0 0 0
0 0 1 0 1 0 0
0 0 0 1 1 0 0
0 0 0 1 0 0 0
After looking at each step to produce the solution, let’s combine them to see the overall idea they represent.
- A field of neighbors is created by shifting the original field by one step in each direction.
- The neighbor field is collapsed by summing up its constituents. This sum is used as the count of neighbors for each cell.
- A comparison applies the rules (almost) individually.
- The result is combined to form the field state after a simulation step.
This same strategy is going to be used with ND4J and SameDiff.
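As a reference point, that strategy can also be written down directly with plain Java arrays, collapsing the shift-and-sum steps into a direct 3x3 neighborhood sum, which yields the same self-inclusive counts. This is only a sketch of the idea on a wrapping field, not the ND4J code:

```java
import java.util.Arrays;

public class LifeStep {
    // One step of the game of life on a wrapping (toroidal) field.
    static int[][] step(int[][] field) {
        int rows = field.length, cols = field[0].length;
        int[][] next = new int[rows][cols];
        for (int r = 0; r < rows; r++) {
            for (int c = 0; c < cols; c++) {
                // Sum the 3x3 neighborhood, including the cell itself,
                // just like the APL double reduction +/ +/ does.
                int sum = 0;
                for (int dr = -1; dr <= 1; dr++) {
                    for (int dc = -1; dc <= 1; dc++) {
                        sum += field[(r + dr + rows) % rows][(c + dc + cols) % cols];
                    }
                }
                // sum == 3 covers births and part of survival; sum == 4
                // counts only where the cell itself was populated.
                next[r][c] = (sum == 3 ? 1 : 0) + (sum == 4 ? 1 : 0) * field[r][c];
            }
        }
        return next;
    }

    public static void main(String[] args) {
        int[][] glider = {
            {0, 0, 0, 0, 0, 0, 0},
            {0, 0, 0, 1, 0, 0, 0},
            {0, 0, 0, 0, 1, 0, 0},
            {0, 0, 1, 1, 1, 0, 0},
            {0, 0, 0, 0, 0, 0, 0},
        };
        for (int[] row : step(glider)) {
            System.out.println(Arrays.toString(row));
        }
    }
}
```

Running this on the glider field from above reproduces the result of the APL one-liner.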
Conway’s Game of Life in Java with ND4J
Using this strategy in plain Java would require a lot of looping constructs, basically taking it down to a very low level of abstraction. But, as most of the operations are obviously tensor math, and the advent of Deep Learning has brought with it a flurry of tensor math libraries, let’s use ND4J to implement it.
ND4J is the computation engine behind Deeplearning4J. It aims to be roughly feature-compatible with NumPy, so everything shown here is easily ported to Python as well. Being on the JVM, it is also usable from many different languages, like Clojure, Kotlin or Scala.
Right at the first step, there is the first hurdle to overcome. At the time of writing, ND4J (version 0.9.1) doesn’t have a rolling operation. However, a simple matrix multiplication provides all the necessary rolling: the effect is achieved by first shifting an identity matrix by the wanted amount, and then multiplying it with the game field. Creating such a shifted identity matrix still requires a for loop, but it will be the only (explicit) one in this example.
static INDArray createRotationMatrix(int howMuch, int columns) {
    // An identity matrix whose diagonal is shifted by howMuch (with wrap around)
    final INDArray rotationMatrix = Nd4j.zeros(columns, columns);
    for (int i = 0; i < columns; i++) {
        final int targetIndex = (columns + (i + howMuch)) % columns;
        rotationMatrix.putScalar(i, targetIndex, 1);
    }
    return rotationMatrix;
}
There isn’t much to this method: it places ones along a diagonal that is shifted by howMuch positions, wrapping around. The targetIndex calculation looks the way it does because it allows howMuch to be a negative number, which in turn makes it easy to express the direction of the shift.
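To see why a shifted identity matrix implements rolling, here is the same idea checked with ordinary Java arrays instead of INDArrays (a plain-Java sketch, not ND4J API):

```java
import java.util.Arrays;

public class RotationCheck {
    // Plain-array version of createRotationMatrix: an identity matrix
    // with its diagonal shifted by `howMuch` (wrapping around).
    static int[][] createRotationMatrix(int howMuch, int columns) {
        int[][] m = new int[columns][columns];
        for (int i = 0; i < columns; i++) {
            m[i][(columns + i + howMuch) % columns] = 1;
        }
        return m;
    }

    // Ordinary matrix-vector multiplication.
    static int[] multiply(int[][] m, int[] v) {
        int[] out = new int[m.length];
        for (int i = 0; i < m.length; i++) {
            for (int j = 0; j < v.length; j++) {
                out[i] += m[i][j] * v[j];
            }
        }
        return out;
    }

    public static void main(String[] args) {
        int[] v = {1, 2, 3, 4};
        // Multiplying with the shifted identity rolls the vector.
        System.out.println(Arrays.toString(multiply(createRotationMatrix(1, 4), v)));
    }
}
```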
Next, the rotation matrices are precomputed. They can stay the same over the whole simulation, so there isn’t much of a point in recalculating them at each step.
final INDArray horizontalRolling = Nd4j.vstack(
        Stream.of(-1, 0, 1).map(it -> createRotationMatrix(it, rows)).toArray(INDArray[]::new)
).reshape(3, rows, rows);
final INDArray verticalRolling = Nd4j.vstack(
        Stream.of(-1, 0, 1).map(it -> createRotationMatrix(it, columns)).toArray(INDArray[]::new)
).reshape(3, columns, columns);
This is already a pretty terse implementation of creating, stacking and reshaping the rotation matrices into their required form. Yet, compared to the APL version, it certainly feels unwieldy.
Anyway, horizontalRolling is now a tensor of the shape (3, rows, rows) and verticalRolling is a tensor of the shape (3, columns, columns). The rows and columns are defined by the size of the game field, and as the shift is needed across two different axes – both rows and columns have to be shifted – the rolling tensors have to be created in compatible shapes.
With everything in place, the actual rolling and spreading out happens with two tensorMmul calls.
final INDArray rolled = Nd4j.tensorMmul(
        horizontalRolling,
        Nd4j.tensorMmul(verticalRolling, field, new int[][]{ {1}, {1}}),
        new int[][]{ {1}, {2}}
);
Note that dimensions in ND4J are 0-indexed, i.e. they start at 0 for the first dimension. The vertical rolling matrices are multiplied with the field along their second tensor dimension, which results in 3 fields that are shifted in their respective directions, but also transposed.
Remember that in simple matrix multiplication the order of the operands matters. Because the vertical rolling tensor was used as the first operand, and the multiplication happened along the second game field dimension, the result was transposed.
This transpose is a requirement for rolling along the other axis. The result of this first tensorMmul call is a tensor with the shape (3, columns, rows). The second call once again takes the rolling tensor first, as the result should be transposed back into its original orientation. As the field input now is a 3-dimensional tensor, the matrix multiplications now happen along the third dimension instead of the second. This in turn results in a 4-dimensional tensor of the shape (3, rows, 3, columns). Inspecting it in this state would produce rather unexpected output, as working with 4-dimensional data can be a bit daunting. But by permuting the axes, it can be made easily readable again.
System.out.println(rolled.permute(0, 2, 1, 3));
This is just an intermediate result, so let’s add the calculation to count all neighbors, after finishing the rotations.
final INDArray sum = Nd4j.tensorMmul(
        horizontalRolling,
        Nd4j.tensorMmul(verticalRolling, field, new int[][]{ {1}, {1}}),
        new int[][]{ {1}, {2}}
).sum(0, 2);
ND4J allows summing along multiple axes at once. As the axes for the neighbors run along the first and third place in the shape, those are the axes to sum over. The result is again of the original shape (rows, columns), as the additional dimensions have been consumed by the sum calculation. sum now contains exactly the same neighbor counts as shown in the APL example.
All that is left to do now is to keep only those cells that satisfy the rules.
return sum.eq(3).addi(sum.eq(4).muli(field));
ND4J doesn’t have Boolean AND or OR operators, so their arithmetic equivalents are used instead. First, sum is compared to 4, and an element-wise multiplication with the field (the in-place variant muli) keeps only those cells that were originally populated. The result is then added to the result of the 3-comparison (again using the in-place variant addi). In principle, multiplication and addition aren’t direct replacements for Boolean operations, but they work out here: a product of two Boolean (0/1-valued) values is again a Boolean value, and the sum of two Boolean matrices is again Boolean if their set elements do not overlap. That is ensured here, as a value cannot be equal to both 3 and 4 at the same time.
The result is just the same as in the APL example, but there is no unnesting that has to be done, as the summing has reduced the number of axes even before the rules were applied.
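The equivalence between the arithmetic formulation and the direct Boolean rules can be checked exhaustively over all combinations of cell state and neighbor count; a small plain-Java sanity check (names are illustrative):

```java
public class BooleanArithmetic {
    // Arithmetic form used with ND4J: eq(3) + eq(4) * cell, where
    // `sum` is the 3x3 neighborhood sum including the cell itself.
    static int arithmeticRule(int cell, int sum) {
        return (sum == 3 ? 1 : 0) + (sum == 4 ? 1 : 0) * cell;
    }

    // Direct statement of Conway's rules; `neighbors` excludes the cell.
    static int directRule(int cell, int neighbors) {
        if (cell == 1) {
            return (neighbors == 2 || neighbors == 3) ? 1 : 0;
        }
        return neighbors == 3 ? 1 : 0;
    }

    public static void main(String[] args) {
        // The two formulations agree for every cell state and neighbor count.
        for (int cell = 0; cell <= 1; cell++) {
            for (int neighbors = 0; neighbors <= 8; neighbors++) {
                int sum = neighbors + cell; // self-inclusive sum
                if (arithmeticRule(cell, sum) != directRule(cell, neighbors)) {
                    throw new AssertionError("mismatch at " + cell + ", " + neighbors);
                }
            }
        }
        System.out.println("equivalent for all 18 cases");
    }
}
```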
Conway’s Game of Life in Java with SameDiff
SameDiff is the upcoming graph and automatic differentiation API for ND4J. It allows a computation to be defined as a graph, pretty much the same way as TensorFlow or PyTorch do. The SameDiff API is very close to the normal ND4J API. However, it doesn’t include non-differentiable functions, as they would break the automatic differentiation.
A game of life implementation doesn’t require any differentiation, but it also doesn’t require any non-differentiable functions. So let’s take a look at how to implement it using SameDiff.
final SameDiff sd = SameDiff.create();
final SDVariable sdLifeField = sd.var("lifeField", field);
final SDVariable sdVerticalRotation = sd.var("⌽", verticalRolling);
final SDVariable sdHorizontalRotation = sd.var("⊖", horizontalRolling);
First, a SameDiff instance is created; it allows the registration of variables and the creation of additional nodes that represent further computations. Note that the field and the rotation matrices are the same as those in the plain ND4J example, as they are just plain tensors.
final SDVariable sdSum = sd.sum(
        sd.tensorMmul(
                sdHorizontalRotation,
                sd.tensorMmul(sdVerticalRotation, sdLifeField, new int[][]{ {1}, {1}}),
                new int[][]{ {1}, {2}}
        ),
        0, 2
);
final SDVariable result = sd.eq(sdSum, 3).addi(sd.eq(sdSum, 4).muli(sdLifeField));
Again, no surprises while setting up the computation. It looks almost the same as the plain ND4J version, the only difference being that – at the time of writing – SDVariable instances don’t have .sum or .eq methods yet, so those have to be called on the SameDiff instance instead.
As this concludes the definition of the computation, it is now time to actually execute it and get the result.
sd.exec();
return result.getArr();
Still, no surprises here. The graph is executed and the result variable is read to get its value.
This is a very basic example of using SameDiff. It would be possible, for example, to define the game field as a placeholder variable that only knows the shape of its content. This would allow reusing the same graph for multiple steps of the simulation. Also, the game of life is at best an edge case for SameDiff, as the library is designed with machine learning in mind.
Conclusion
This post set out to implement Conway’s game of life using tensor math. To provide an intuition for how to do that, it introduced APL, which can itself be considered an appropriate vessel for tensor math. The single-line APL expression for calculating the next step in Conway’s game of life was explained, and that explanation in turn motivated the implementation in Java using the tensor math library ND4J and its graph variant SameDiff.
If you’d like to play around with APL, you can find an online version at tryapl.org, where you will also find a more detailed walk-through of this particular implementation of Conway’s game of life. If you’d like to know more about the reasoning behind APL and notation as a tool of thought, you might also be interested in the Turing Award lecture / paper by Kenneth E. Iverson.
You can find more information about ND4J at nd4j.org/userguide. If you’d like to play around with SameDiff, be warned that, at the time of writing, it hasn’t been released yet; you can get it by using a SNAPSHOT version of ND4J. And as it isn’t released yet, you will also have trouble finding documentation on it, but there are some usage examples in the SameDiffTests.