Complexity_ A Guided Tour - Melanie Mitchell [74]
CHAPTER 11
Computing with Particles
IN 1989 I HAPPENED TO READ AN ARTICLE by the physicist Norman Packard on using genetic algorithms to automatically design cellular automaton rules. I was immediately hooked and wanted to try it myself. Other things got in the way (like finishing my Ph.D. thesis), but working on this idea was always in the back of my mind. A few years later, with thesis finished and a research job at the Santa Fe Institute, I finally had the time to delve into it. A young undergraduate student named Peter Hraber was hanging around the institute at that time, looking for something to work on, so I recruited him to help me with this project. We were soon after joined by a graduate student named Rajarshi Das, who had independently started a similar project.
Like Packard, we used a genetic algorithm to evolve cellular automaton rules to perform a specific task called “majority classification.” The task is simple: the cellular automaton must compute whether its initial configuration contains a majority of on or off states. If on states are in the majority, the cellular automaton should signal this fact by turning all the cells on. Similarly, if off has an initial majority, the cellular automaton should turn all cells off. (If the number of initial on and off states is equal, there is no answer, but this possibility can be avoided by using a cellular automaton with an odd number of cells.) The majority classification task is sort of like having to predict which of two candidates will win an election in your city when all you can see are the political signs on your close neighbors’ front lawns.
The majority classification task would be trivial for a von-Neumann-style computer. Simply have the central processing unit count the respective numbers of on and off states in the initial configuration, storing the count at each step in memory. When done counting, retrieve the two totals from memory, determine which one is larger, and reset the configuration to all on or all off depending on this comparison. A von-Neumann-style computer can do this easily because it has random access memory in which to store the initial configuration and intermediate sums, and a central processing unit to do the counting, the final comparison, and the resetting of the states.
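The counting procedure just described can be sketched in a few lines of code. This is a hypothetical illustration, not code from the book: count the on cells, count the off cells, compare, and reset every cell to the winning state.

```python
# A minimal sketch of how a von-Neumann-style computer solves the
# majority classification task: count, compare, reset.

def classify_majority(config):
    """Reset the configuration to all on (1) or all off (0), whichever
    state holds the majority. Assumes len(config) is odd, so a tie
    cannot occur."""
    ones = sum(config)            # total number of "on" cells
    zeros = len(config) - ones    # total number of "off" cells
    winner = 1 if ones > zeros else 0
    return [winner] * len(config)

print(classify_majority([1, 0, 1, 1, 0]))  # majority on -> [1, 1, 1, 1, 1]
```

The point of the sketch is what it takes for granted: a global view of the whole configuration and a place to store the running counts, which is exactly what a cellular automaton lacks.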
In contrast, a cellular automaton has no random access memory and no central unit to do any counting. It has only individual cells, each of which has information only about its own state and the states of its neighbors. This situation is an idealized version of many real-world systems. For example, a neuron, with only a limited number of connections to other neurons, must decide whether and at what rate to fire so that the overall firing pattern over large numbers of neurons represents a particular sensory input. Similarly, as I describe in chapter 12, an ant must decide what job it should take on at a given time—in order to benefit the colony as a whole—based only on its interactions with a relatively small number of other ants.
The bottom line is that it is in general difficult to design cellular automata to perform tasks that require collective decision making among all the cells. Peter and I were interested in how a genetic algorithm might solve this design problem.
Using Norman Packard’s work as a starting point, we coded up simulations of one-dimensional cellular automata in which each cell is connected to three cells on either side—that is, the size of each cell’s neighborhood (including the cell itself) is seven cells.
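A single update step of such a cellular automaton might look like the following sketch (assumed details, not the authors' actual code). A rule for this architecture is a lookup table assigning one output bit to each of the 2^7 = 128 possible seven-cell neighborhood patterns; the lattice wraps around, so the first and last cells are neighbors.

```python
# One synchronous update step of a one-dimensional, two-state cellular
# automaton in which each cell sees itself plus three neighbors on each
# side (neighborhood size 7). Hypothetical illustration.

RADIUS = 3  # three neighbors on either side of each cell

def step(config, rule_table):
    """Apply the rule table to every cell at once; edges wrap around."""
    n = len(config)
    new_config = []
    for i in range(n):
        # Read the seven-cell neighborhood, wrapping at the lattice edges.
        neighborhood = tuple(config[(i + d) % n]
                             for d in range(-RADIUS, RADIUS + 1))
        new_config.append(rule_table[neighborhood])
    return new_config

# Build one example rule table: each cell adopts whichever state is
# currently a majority in its own neighborhood (the "first guess" rule
# discussed in the text).
majority_rule = {}
for bits in range(2 ** (2 * RADIUS + 1)):
    pattern = tuple((bits >> k) & 1 for k in range(2 * RADIUS + 1))
    majority_rule[pattern] = 1 if sum(pattern) > RADIUS else 0
```

A genetic algorithm in this setting searches over the 128 output bits of the rule table, a space of 2^128 possible rules, which is why hand design or brute-force search is hopeless.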
Think for a minute how you might design a rule for a cellular automaton like this to perform the majority classification task.
One reasonable first guess for a rule might be: “Each cell should update to the state that is currently a majority in its local neighborhood.” This would be like basing your election prediction on which