## Roger White, Guy Engelen, and Inge Uljee

Print publication date: 2015

Print ISBN-13: 9780262029568

Published to MIT Press Scholarship Online: May 2016

DOI: 10.7551/mitpress/9780262029568.001.0001



# Theory and Consequences

Chapter 2 of Modeling Cities and Regions As Complex Systems (The MIT Press). DOI: 10.7551/mitpress/9780262029568.003.0002

# Abstract and Keywords

Cities are systems maintained far from thermodynamic equilibrium by a constant inflow of energy and an outflow of entropy in the form of waste and pollution. Such systems exhibit complex, not entirely predictable, but increasingly ordered behavior. In contrast to the predictable, law-like behavior of equilibrium systems, they have open futures, so a formal treatment must be algorithmic rather than purely mathematical. Relatively generic models of self-organizing systems developed by John Conway, Stephen Wolfram, Christopher Langton, Stuart Kauffman, and others have resulted in “candidate principles” characterizing such systems. One such principle is that self-organized systems (e.g., cities) tend to have a fractal structure. The urban models discussed in this book are much more detailed and realistic than the generic models, and this permits a variety of empirical tests as well as planning applications. But both tests and applications have unconventional characteristics because of the open-futures nature of the models.

> World is crazier and more of it than we think,
> Incorrigibly plural. I peel a portion
> Of tangerine and spit the pips and feel
> The drunkenness of things being various.

—Louis MacNeice (1907–1985)

Cities are complex. They are also self-organizing. The complexity is obvious to anyone who has any experience of a city, even though it is usually just as obvious that the complexity is highly organized and more or less functional. It may be less obvious that cities are self-organizing since we who live in them create them, and do so with a high degree of intentionality. Through our individual, purposeful decisions, and especially through the collective decisions of developers, governments, planning departments, and special-purpose agencies like transit authorities, we intentionally modify and extend our cities so that they satisfy our various needs and desires. But it is precisely the number and variety of agents involved that means that the city is, in the end, largely self-organized. Each individual decision is made in the context of the situation existing at the time, so that each is guided and constrained by the cumulative result of previous decisions. As a result, even though individual features of the city reflect the intentions of the individuals and organizations responsible for them, the structure of the city as a whole emerges largely without anyone having decided it. For this reason, it makes sense to think of the city as emerging from an endogenous process of self-organization: the city creates itself. This view is well represented by Aldo Rossi, an Italian architect who was concerned with urban form and who, following the French urban geographer Georges Chabot, maintained that “the city is a totality that constructs itself and in which all the elements participate in forming the âme de la cité” (Rossi, 1982, p. 55).

We know and understand cities by living in them. Occasionally, this understanding is made explicit, as in Jane Jacobs’s classic book The Death and Life of Great American Cities (1961), a work that has had a lasting influence, most likely because it seems to capture some useful truths about the way cities work. Although the “data” consist largely of her own observations and lived experiences in New York, and thus reflect the real complexity of that particular city and its âme, Jacobs was able to generalize, to draw lessons from the changes she observed over the years and to show why loss of complexity means death for any neighborhood, or even for an entire city. Her approach has a deep resonance with the theory of complex self-organizing systems because she recognized the irreducible role of idiosyncratic detail, the functional role of complexity, and the necessity of a diachronic approach, that is, a focus on change.

The vast literature on cities produced by historians and sociologists, as well as by planners, geographers, and others, has greatly extended and contextualized our indigenous knowledge. This literature is mostly rooted in a humanistic tradition that includes the softer social science approaches, and thus retains a wealth of detail and nuance reflecting the richness of the city itself. Standing in contrast to this tradition is one that has been characterized as scientific. Though grounded in neoclassical economics, this more recent approach encompasses urban and regional economics, and it also involves urban and economic geography, transportation engineering, and several other fields. Its aim is to produce a formal theory of urban form and structure, one that is deductive in nature and based on a few postulates of human behavior such as utility maximization and rational choice. Its language is mathematics, although it places increasing importance on inferential statistics. Despite its notable success in several specific domains such as traffic prediction and the spatial behavior of retail customers, it is increasingly marginalized as a mainstream approach to understanding urban systems because most of its theoretical results are unrealistic and unusable. As a case in point, the mathematical treatment of the spatial pattern of urban land uses has become increasingly sophisticated, but the outcome continues to be a city of concentric zones, each with a single land use; all detail, all complexity, and all realism are lost (see, for example, Alonso, 1964; Angel and Hyman, 1976; Fujita, 1986, 1989; Papageorgiou, 1990). Indeed, social scientists, except for economists, have for the most part rejected this approach as a way of understanding cities.

Applying the theory of complex self-organizing systems to cities is, in a sense, an attempt to generalize and formalize the qualitative understandings developed within the framework of the humanities and social sciences. This essentially scientific approach aims to capture the inherent complexity of the city, including the continual transformations by which the city makes and remakes itself; despite its formal methodology, however, it shares several essential properties with the approaches of the soft social sciences and humanities. Specifically, it generates and works with histories, it recognizes and relies on the fundamental role of context in explanation, and its predictive power is inherently limited. In these respects, it is much closer to qualitative methodologies than it is to the classical scientific approach dedicated to the search for universal laws. On the other hand, since its foundations are in formal, computable systems, this approach retains the logical rigor and explicitness of the classical “hard” sciences. In this sense, its theories are objective: their logical consequences can be examined explicitly and extensively.

The complex self-organizing systems approach has its roots in two schools. The first is the Brussels school, growing from the work of Ilya Prigogine at the Université Libre and the Solvay Institute, both in Brussels (the synergetics approach, originating in the work of Hermann Haken at Stuttgart, is broadly similar). The second is the Santa Fe school, associated with the Santa Fe Institute in New Mexico, founded by physicists from the Los Alamos labs. Although both schools share the same underlying philosophy, the emphasis and methods of their approaches are somewhat different. Prigogine’s approach is anchored in the natural sciences, specifically, in the behavior of systems driven by an input of energy—in other words, essentially all of the systems we are concerned with on this planet, from purely physical ones like the weather to biological, ecological, and social systems. The Santa Fe approach, on the other hand, centers on the use of relatively abstract computer models and seeks to understand in a general way how complex systems self-organize and adapt. In other words (and exaggerating the differences), the Prigogine approach investigates the behavior of real systems, whereas the Santa Fe school investigates the algorithmic logic of model systems. In fact, both approaches are useful, especially when combined. Although the focus of this book is on real systems—cities and regions—our methods involve simulation models.

# Far-from-Equilibrium Systems

“From being to becoming” is the expression Prigogine (1980) used to describe the emergence of the science of self-organizing systems—the science of becoming—from the classical science of universal laws describing the behavior of entities that already exist—the science of being. The phrase nicely emphasizes the fundamental problem that classical science ignores: where do new things come from? How is novelty possible? The problem involves time.

Classical science largely involves laws that are time reversible, which means in effect that, by observing the system through the lens of the laws, we step outside time and see the entire system, past and future, all at once; in other words, for us, the observers, time does not exist. As long as the laws can be expressed adequately using the language of logic and mathematics, the world we know through them will be one of being rather than one of becoming, because logic and mathematics are themselves without time (see box 2.1).

As Ilya Prigogine and Isabelle Stengers (1984) point out, however, the development of thermodynamics in the nineteenth century—and in particular, the formulation of the second law, the law of increasing entropy—brought time into physics in an essential way. The second law holds that, in the kinds of systems treated by statistical mechanics, time is not reversible: there is an arrow of time. Although the second law once violated the sensibilities of many physicists, it has endowed statistical mechanics with a characteristic essential to sound theory: the ability to generate testable predictions. Any isolated thermodynamic system can be predicted to evolve to the state of maximum entropy possible given the system’s environment. Thus, for example, if you do not drink your cup of coffee, it will cool to room temperature; if you do, it will cool in your stomach to your body temperature.

Prigogine asked what happens when energy is pumped into a system to push it ever farther away from its thermodynamic equilibrium. The answer is that it will organize itself into macro-scale structures. If the cold coffee is poured back into the pot, the molecules of liquid, which at this point have only Brownian (i.e., random) motion, will, when the pot is heated, organize themselves into macro-scale convection cells of organized flows. Of course, since the second law still applies, there must be a corresponding, though greater, increase in entropy elsewhere, in this case, in generating and transmitting the power used to heat the pot and in heating the pot itself (Prigogine and Stengers, 1984). The price of self-organization (lower entropy) is always higher entropy elsewhere, often in a form we refer to as “pollution.” Self-organizing systems structure themselves by exporting entropy.

The process of self-organization that occurs as a system is pushed farther from thermodynamic equilibrium by greater energy inputs can be illustrated in a literal example that also has a broader metaphorical value. Think of a river system draining into the sea. If we put a canoe into one of the headwater streams, we can drift downstream with no effort, letting the current carry us. Our lazy holiday trip comes to an end when we reach the mouth of the river because there is no more current to carry us (figure 2.1). At this point, we have reached thermodynamic equilibrium. Having started the trip from a position of high potential energy—i.e., low entropy—we have been carried along as the system evolved toward its maximum entropy, equilibrium state. Wherever we start our trip, in whichever tributary, we will always end up at the same spot—the mouth of the river. In other words, this entropy-maximizing system is predictable, as we would expect, since it is an instantiation of the second law of thermodynamics.

Figure 2.1 Equilibrium location for the canoe is the mouth of the river regardless of the starting point; it is thus predictable.

Now, having arrived at the mouth of the river, we turn the canoe around and head upstream. Suddenly, everything is different. First, in order to move at all, we must paddle hard; laziness is no longer an option. Then, as we move upstream, we continually come to choice points—do we take the right fork or the left? Do we have a particular goal in mind, like the headwater where we left our car? And if so, do we have a map and know how to use it? Or are we just exploring? In other words, as we put energy into the system by paddling and move ever farther from thermodynamic equilibrium, we find an increasing number of possible system states—that is, tributaries in which the canoe could be located (figure 2.2). The system state is no longer predictable by a simple law. Rather, it depends on a history of choices made at the bifurcation points, that is, points where one possible system state splits into two. What determines the choice? Perhaps pure chance; but as long as we are paddling the canoe, the choice may be guided by a goal and constrained by the limits of knowledge (do we have a map? If so, how good is it?), competence (can we read the map?), and physical ability (is the fork we want to take navigable, or is the current too swift for us because of flood conditions?). The system thus seems quite different. It is no longer predictable. Given the initial state, we can no longer say what the final state will be. The final state is the result of a particular contingent history, the history of our choices at each bifurcation point. Consequently, explanation by universal physical covering law must be replaced by historical explanation, and that can become quite complicated as various relevant factors are included, factors that were not relevant when the system was moving toward its maximum entropy, equilibrium state.

Figure 2.2 When the canoe is paddled upstream, away from its equilibrium position, its final location cannot be predicted.

The physics of these two types of situations is closely tied to the mathematics used to analyze them. In the downstream case (figure 2.1), a potential energy function can be defined over the area of the river basin and differential calculus used to find the minimum of this potential function and show that it corresponds to sea level. In the upstream case (figure 2.2), the potential function will show all the local minima corresponding to a given energy input, but will give no indication as to which solution will be chosen—hence the unpredictability.

Pushing the example of a far-from-equilibrium system well into metaphorical territory, if we send many canoes upstream from the mouth of the river, we may find that the various headwaters each collect roughly the same number of canoes; or we may find that many headwaters collect just a few canoes, a smaller number collect a larger number of canoes, and a very few headwaters collect many canoes. In other words, this far-from-equilibrium system can generate a variety of possible patterns of canoe clusters. We might whimsically think of these clusters as canoe cities, and the ensemble as a regional system of such cities (figure 2.3). Of course, real systems of cities, and the cities themselves, are also far-from-equilibrium systems in that they depend on a constant inflow of energy in the form of food, gas, electricity, and so on. They construct and maintain themselves by dissipating the energy in higher-entropy forms such as sewage, garbage, and greenhouse gases.

In general, then, when we inject energy into a thermodynamic system, we convert simple, law-like, predictable behavior into complex, unpredictable, but increasingly ordered behavior. Nevertheless, it is the same system. The only difference is that, instead of taking free energy out of the system, we are putting energy into it. And if it is the same system, if our study of it as it moves to equilibrium is characterized as science, then our study of it as it moves away from equilibrium must also be characterized as science, though a different kind of science—one with more complications and less certainty, but one yielding just as much understanding.

All of the global systems of interest to us are far from thermodynamic equilibrium and are undergoing continuous self-organization in the sense described by Prigogine. Tectonic processes that shape the surface of the earth are driven by heat from radioactive decay in the interior of the planet. Surface processes, from ocean and atmospheric circulation to biosphere dynamics, are driven by continuing inflows of energy from the sun. And human societies, with their increasingly elaborate economies, are also ultimately dependent on the flux of solar energy, as well as on fossil fuels. Of course, ecosystems and human societies are more than just physical systems. Unlike the purely physical systems, which are simply collections of blind molecules, they are composed of goal-directed entities, that is, living organisms. In the case of a bacterium, the goals may be simple, whereas at the other extreme, the goals of human beings and their organizations are multifarious and complicated. But in both cases, as physical systems far from thermodynamic equilibrium, they undergo a process of self-organization. The goals and intentionality found in living systems simply serve to mediate and direct the process, as when, paddling our canoe upstream, we have to decide which fork to take. Of course, in modeling cities, we will be focused on the mediation process—the culture-based tastes and desires, the economics, the politics, and the planning—rather than the physics. But it is important to recognize that at the most fundamental level we are dealing with a physical system. This perspective may ultimately lead us to deeper insights into the relationship between the natural environment, on the one hand, and our economy and society, on the other.

Figure 2.3 If many canoes are sent upstream, they will map out all of the possible final locations—the far-from-equilibrium steady states—and will collectively constitute a system of “canoe cities,” analogous to an urban system, with many small aggregations of canoes and a few large ones.

# The Algorithmic Approach

The key to modeling cities as self-organizing systems is to treat them not as artifacts but as processes—which means embedding the model in time. From this perspective, the natural language of modeling is the algorithm, since an algorithm is a representation of a process. An algorithm must normally be executed in time, step by step. Although some special algorithms can be “solved” to find the outcome without executing them, much as a difference or differential equation can be solved to find the value of the variable as a function of time (e.g., equation 2.1.2 in box 2.1), the halting theorem shows that, in general, this is not possible. The output of most algorithms can only be known by executing them, step by step, in time. This is the truly revolutionary aspect of the computer: the program, while executing, is an algorithm embedded in time, from which it cannot be removed. It thus allows us to treat far-from-equilibrium systems formally since these systems are also inextricably embedded in time as they organize or create themselves. The phenomena emerging in far-from-equilibrium systems are the ones that have always resisted treatment by conventional scientific laws. Because far-from-equilibrium systems have open futures, they are creative; therefore, universal laws can have little to say about them.

On the other hand, algorithms can represent far-from-equilibrium systems, thus permitting us to explore the possibilities inherent in them. But what kind of algorithms? For insights into the basic nature of complex self-organizing systems—insights that are in a sense analogous to the universal laws of classical science—simple algorithms that capture the generic behavior of such systems are appropriate. The original algorithm of this kind was the cellular automaton (CA), conceived by Stanislaw Ulam in the late 1940s at the Los Alamos National Laboratory as a simple tool for exploring the nature of dynamical systems (see box 2.2). Since then, CA have been ever more widely used to explore the general nature of dynamical systems because computationally they are highly efficient—they run fast. This is important because it means that a wide variety of situations can be investigated quickly to give comprehensive results; Michael Batty (2013) emphasizes this point as well.

When it was proved that the Game of Life, an extremely simple CA with two cell states and three transition rules, was capable of universal computation (Poundstone, 1985), CA quickly became a favored technique for investigating the nature of complex self-organizing systems. Later, other types of algorithms useful for investigating complex adaptive systems were developed, such as classifier systems, artificial neural networks, and random Boolean networks. The algorithmic approach is associated with the Santa Fe Institute since much of the generic, abstract work on the nature of complex systems has been carried out there. If the Brussels school was instrumental in developing a natural sciences–based theory of complex systems, researchers at the Santa Fe Institute were in effect working toward a formal theory. Although there is no fully developed formal theory as yet, there are strong suggestions of basic principles, which appear most centrally in the work of Christopher Langton and Stuart Kauffman.
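
The Game of Life's rules are simple enough to state in a few lines of code. The following Python sketch (illustrative only; it is not one of the urban models discussed in this book) implements the rules on an unbounded grid and checks the well-known "glider," a pattern that reproduces itself shifted diagonally by one cell every four generations:

```python
# Minimal Game of Life sketch. The grid is a set of live-cell coordinates.
# Rules: a dead cell with exactly 3 live neighbors is born; a live cell with
# 2 or 3 live neighbors survives; every other cell is dead in the next step.

from collections import Counter

def step(live):
    """Advance one generation on an unbounded grid of live-cell coordinates."""
    neighbor_counts = Counter(
        (x + dx, y + dy)
        for (x, y) in live
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    return {
        cell
        for cell, n in neighbor_counts.items()
        if n == 3 or (n == 2 and cell in live)
    }

# A "glider": a five-cell pattern that translates itself diagonally.
glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
g4 = glider
for _ in range(4):
    g4 = step(g4)
# After four generations the glider reappears shifted by (1, 1).
assert g4 == {(x + 1, y + 1) for (x, y) in glider}
```

The glider illustrates why the Game of Life interested complexity researchers: persistent, moving structures emerge from purely local rules, and such structures are the raw material of the universal computation result cited above.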

Langton approached complex adaptive systems by examining the behavior of a simple generic model—a one-dimensional CA, but one with a relatively large cell neighborhood as well as a relatively large number of cell states (therefore it is actually a class of CA). Since the state of a cell in a CA depends on the configuration of cell states in its neighborhood, this generic CA has a large number of possible transition rule sets. Each rule in a rule set specifies which cell state will be the outcome of a given neighborhood configuration, with one of the states designated as the “quiescent state.” The many different possible sets of transition rules can then be characterized by the proportion, λ, of rules in a rule set that lead to a non-quiescent state. Langton showed that lower values of λ result in a steady state or simple limit cycle equilibrium, whereas higher values lead to a chaotic churn of cell states. Between these two regimes, however, values for λ near a critical value, λc, generate extremely long, highly structured transients, and the transient length increases exponentially with the size of the cell space. These transients have fractal properties and are apparently capable of computation. Interestingly, they have intermediate values of Shannon entropy. Computation requires (1) information storage, which is possible only with the stability and order characterized by low Shannon entropy; (2) information transmission, which raises entropy; and (3) the apparent randomness or chaos of information transformation itself, corresponding to high entropy, so that the required dynamical balance occurs at an intermediate level of system entropy (Langton, 1992). In other words, the CA generates interesting, complex configurations—dynamic structures that are capable of computation—when it is poised on the boundary between simple order and chaotic behavior. Langton says of this, “Life exists at the edge of chaos.”
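
In Langton's formulation, λ is simply the fraction of entries in the transition table that map to a non-quiescent state, so it can be read directly off a rule table. The sketch below (a schematic illustration; the parameter values and function names are our own) builds a random rule table for a one-dimensional CA with k states and an n-cell neighborhood at a target λ, and then measures λ from the table:

```python
# Sketch of Langton's lambda parameter. Each neighborhood configuration maps
# to the quiescent state 0 with probability (1 - lam), and to a randomly
# chosen non-quiescent state otherwise (the "random table" construction).
import itertools
import random

def random_rule_table(k, n, lam, seed=0):
    """Random transition table for a 1-D CA: k states, n-cell neighborhood."""
    rng = random.Random(seed)
    table = {}
    for config in itertools.product(range(k), repeat=n):
        if rng.random() < lam:
            table[config] = rng.randrange(1, k)  # transition to an active state
        else:
            table[config] = 0                    # transition to quiescence
    return table

def measured_lambda(table):
    """Fraction of transitions that lead to a non-quiescent state."""
    active = sum(1 for s in table.values() if s != 0)
    return active / len(table)

table = random_rule_table(k=4, n=3, lam=0.45)
print(round(measured_lambda(table), 2))  # close to 0.45 for a table this size
```

Sweeping lam from 0 toward 1 and running the resulting CA is, in essence, Langton's experiment: ordered behavior at low λ, chaos at high λ, and long structured transients near λc.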

Stephen Wolfram, a pioneer of systematic research on CA properties, had earlier discovered similar complex behavior (his class IV behavior) in simple one-dimensional CA (Wolfram, 2002), but Langton’s results provide a much richer understanding of the phenomenon. Langton has shown similar behavior in another class of CA, and it has been noted that the Game of Life also has a rule set with λ ≈ λc (Mitchell, Crutchfield, and Hraber, 1994, p. 508). Of course, as Langton himself recognizes, none of these results prove that a CA with λ ≈ λc will necessarily support universal computation, and Mitchell, Crutchfield, and Hraber (1994) pose some qualifications to Langton’s hypothesis; but the results are very suggestive. In any case, for purposes of modeling actual complex systems, it is probably not important that the algorithm being used is strictly capable of universal computation as long as it is able to generate sufficiently rich behavior. What is important is that it can capture the phenomena being modeled. Indeed, although the CA used for most of the urban models described in this book are subcritical (λ < λc) in their deterministic form, they are augmented by a random perturbation so that, in effect, they function as edge-of-chaos models. As we will see in chapter 5, this feature of the model structure seems to reflect a real duality in the way cities function.
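
Wolfram's elementary CA (one dimension, two states, three-cell neighborhoods) are small enough to run in a few lines. This sketch, offered only as an illustration of the class, runs rule 110, a class IV rule later proved capable of universal computation:

```python
# Sketch of a Wolfram elementary CA. The 8-bit rule number encodes the next
# state for each of the 8 possible (left, self, right) neighborhoods: the
# neighborhood bits form an index, and the rule's bit at that index is the
# new cell state. The lattice wraps around at the edges.
def eca_step(cells, rule):
    n = len(cells)
    return [
        (rule >> ((cells[(i - 1) % n] << 2)
                  | (cells[i] << 1)
                  | cells[(i + 1) % n])) & 1
        for i in range(n)
    ]

width = 31
cells = [0] * width
cells[width // 2] = 1          # start from a single live cell
for _ in range(12):
    print("".join(".#"[c] for c in cells))
    cells = eca_step(cells, 110)
```

Printing successive rows of rule 110 from a single seed shows the irregular but structured growth characteristic of class IV behavior; replacing 110 with, say, 250 (class I/II) or 30 (class III) gives the ordered and chaotic regimes for comparison.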

Also at the Santa Fe Institute, Stuart Kauffman (1993) used random Boolean networks rather than cellular automata to investigate the behavior of complex systems. Kauffman pursued this research program intensively over several decades, and his work, along with Langton’s, is responsible for many of the deep insights into the nature of complex adaptive systems. He chose to work with Boolean networks because, unlike the cell neighborhoods of CA, they place no restriction on the pattern of connections, although, otherwise, they are quite similar to Langton’s CA. As a biologist, Kauffman seeks to understand life—any possible life, not just the life we know on this planet. His aim therefore is a formal understanding of adaptability and evolvability, the two characteristics he considers essential for life.

The Boolean network models are quite general: they consist of a network of N nodes, each of which can be in one of two states, with each node connected, on average, to K other nodes. As in a CA, the state of a node depends on the state of the nodes in its neighborhood—that is, those nodes to which it is directly connected. Connections and initial node states are assigned randomly. In Kauffman’s interpretation, the nodes represent genes, the node states, alleles, and the whole network, the genome of an organism.

Node states, like alleles, depend on one another. For example, a node may be turned on or off depending on the on or off state of the nodes it is connected to. Given a set of rules specifying how a node’s state depends on those it is connected to, it is possible for a set of nodes to become frozen in a given configuration, so that the states of these nodes cannot be affected by nodes outside of the frozen set, even though they are connected to those nodes. For large values of K, that is, for highly connected systems, the states of nodes in the network fluctuate chaotically. As K becomes smaller, isolated frozen sets appear. For K = 2, the frozen set percolates through the whole network, leaving isolated regions of chaotic dynamics that cannot communicate with one another since no signal can cross the frozen structure. In this regime, the system has a relatively small number of attractors, good resistance to perturbations, and, when fitness values are assigned to the node states, an ability to adapt by hill climbing on the associated fitness landscape. In other words, the behavior of the system is, in a sense, optimally complex, consisting of a mix of ordered and chaotic dynamics that gives it the ability to adapt, which is similar to the ability to compute. K = 2 is therefore apparently a critical value analogous to Langton’s λ ≈ λc. For K = 1, the system is functionally modular, consisting of a number of isolated subsystems that cannot affect one another (Kauffman, 1989); these are analogous to the simple patterns that appear in Langton’s low-λ CA, or to the patterns generated by Wolfram’s class I and II CA.
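
A random Boolean network of this kind is easy to simulate. Because the state space is finite (2^N states for N binary nodes) and the dynamics deterministic, every trajectory must eventually revisit a state and thereafter cycle forever; the cycle is the attractor. The following sketch (parameter values and function names are ours, chosen for illustration) builds a random N-K network and measures the period of the attractor reached from a random initial state:

```python
# Sketch of a Kauffman N-K random Boolean network: N binary nodes, each
# reading K randomly chosen input nodes through a random Boolean function
# (a random lookup table over the 2^K possible input combinations).
import random

def make_network(N, K, seed=1):
    rng = random.Random(seed)
    inputs = [rng.sample(range(N), K) for _ in range(N)]
    rules = [[rng.randrange(2) for _ in range(2 ** K)] for _ in range(N)]
    return inputs, rules

def step(state, inputs, rules):
    new = []
    for ins, rule in zip(inputs, rules):
        index = 0
        for i in ins:                 # pack this node's input bits into an index
            index = (index << 1) | state[i]
        new.append(rule[index])
    return tuple(new)

def attractor_length(N=12, K=2, seed=1):
    """Iterate from a random state until a state repeats; return the period."""
    rng = random.Random(seed)
    inputs, rules = make_network(N, K, seed)
    state = tuple(rng.randrange(2) for _ in range(N))
    seen = {}
    t = 0
    while state not in seen:
        seen[state] = t
        state = step(state, inputs, rules)
        t += 1
    return t - seen[state]

print(attractor_length())  # short attractors are typical of the K = 2 regime
```

Repeating the measurement for larger K (say K = 5) typically yields much longer, chaotic-looking cycles, which is the contrast between the ordered and chaotic regimes described above.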

Assigning fitness values to alleles (node states) means that each network has an associated fitness landscape that is a function of the network structure and the individual allele fitness values. The organism represented by the network can then move about on the fitness landscape by flipping alleles. This allows it, in principle, to improve its fitness by climbing the peaks on the landscape. When several species of interacting organisms are involved, the fitness landscape is a function of the coupled Boolean networks. In this case, as one species climbs a peak on its fitness landscape, that landscape is being deformed by the adaptive behavior of another species climbing on its own landscape. For lower values of K, the deformation is slower than the movement uphill, so that adaptation is possible, but the peaks are lower, so mean fitness is relatively low. For higher values of K, the peaks are high, but the deformation is so rapid that it is not possible to climb them before they move, so in this case too, mean fitness is low. Separating these two regimes is a critical value of K at which hill climbing is just possible, and mean fitness is maximized (Kauffman, 1994). In the vicinity of the critical value, avalanches of coevolutionary change propagate through the system in response to even minor changes in alleles. These avalanches have a power law distribution.
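
The single-species version of this process, an adaptive walk on an NK-style fitness landscape, can be sketched directly; the coupled, coevolutionary case then amounts to letting each species' contributions depend on other species' alleles as well. In this illustrative sketch (our own simplification, not Kauffman's exact construction), each gene's fitness contribution depends on its own allele and K others, and the organism flips one allele at a time, keeping only uphill moves:

```python
# Hedged sketch of an adaptive walk on an NK-style fitness landscape.
# Gene i's contribution is a random number indexed by the alleles of gene i
# and K other genes; total fitness is the mean contribution, in [0, 1].
import random

def nk_landscape(N, K, seed=2):
    rng = random.Random(seed)
    deps = [[i] + rng.sample([j for j in range(N) if j != i], K)
            for i in range(N)]
    contrib = [{} for _ in range(N)]   # lazily filled contribution tables

    def fitness(genome):
        total = 0.0
        for i in range(N):
            key = tuple(genome[j] for j in deps[i])
            if key not in contrib[i]:
                contrib[i][key] = rng.random()
            total += contrib[i][key]
        return total / N

    return fitness

def adaptive_walk(N=10, K=2, steps=200, seed=2):
    """Hill climb by single-allele flips, accepting only non-downhill moves."""
    rng = random.Random(seed)
    fitness = nk_landscape(N, K, seed)
    genome = [rng.randrange(2) for _ in range(N)]
    f = fitness(genome)
    for _ in range(steps):
        i = rng.randrange(N)
        genome[i] ^= 1                 # try flipping one allele
        f_new = fitness(genome)
        if f_new >= f:
            f = f_new                  # accept the uphill (or neutral) move
        else:
            genome[i] ^= 1             # reject: flip the allele back
    return f

print(round(adaptive_walk(), 3))
```

With larger K the landscape becomes more rugged, with many low local peaks on which such a walk gets stuck, which is the single-species shadow of the coevolutionary trade-off described above.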

Cities, too, may be thought of as consisting of a collection of “species”: families, convenience stores, supermarkets, manufacturers, transit systems, and so forth, competing for resources such as land, access, customers, and money. For a city to function successfully, these species must coadapt toward a state where their mean fitness is optimized. Although in the urban models described in this book we do not model coadaptation explicitly, it is present implicitly in the calibration: the models are calibrated to produce results that have the characteristics of successfully coadapted cities. Coadaptation could be introduced explicitly by embedding a market model for land and by modeling the individuals (people, businesses, organizations) that make up the populations of the “species.” The models would then be (in part) self-calibrating.

# Candidate Principles and Domain-Specific Models

Kauffman’s results using Boolean networks are similar to Langton’s findings using CA. In both cases, it seems that complex, highly structured behavior emerges at a boundary between simple order and chaotic churn. Furthermore, the complex behavior has the capacity for a rich functionality that both the ordered and the chaotic regimes lack: the ability to compute or the ability to adapt and evolve toward an optimum state, which is also, in effect, computation. Kauffman (1994, p. 84) suggests that this is a candidate principle (a “putative principle” in his words), and candidate status is perhaps as much as we can hope for at present. If there are no covering laws for the behavior of far-from-equilibrium systems, then it seems likely that there are no universal principles describing the behavior of formal complex adaptive systems. On the other hand, the halting theorem is such a principle, and so others may exist.

Another quasi-principle that has been proposed is that far-from-equilibrium, self-organizing systems produce fractal structures (see box 2.3). In Langton’s CA, the long transients appearing at the critical value of λ usually contain fractal patterns, as do Wolfram’s class IV CA; and the power law avalanches of coevolutionary change in the critical regime of Kauffman’s Boolean network are also a fractal feature. It is significant that fractal structures emerge along with complexity in these model systems (indeed, they characterize the complexity) because far-from-equilibrium natural phenomena also typically have fractal properties. Coastlines (Mandelbrot, 1982), river systems (Mandelbrot, 1982; Huang and Turcotte, 1989; Thornes, 1990; Maritan et al., 1996), riverbeds (Montgomery et al., 1996), and pulses of contaminants in rivers (Kirchner, Feng, and Neal, 2000); trees (Mandelbrot, 1982), lungs (Mandelbrot, 1982), and extinctions in the fossil record (Solé et al., 1997); the distribution of marine species (Haedrich, 1985), the distribution of marine prey as well as the movements of their predators (Sims et al., 2008), and the size of clusters of ant colonies (Vandermeer, Perfecto, and Philpott, 2008); human travel (Brockmann, Hufnagel, and Geisel, 2006), the pace of life and innovation in cities (Bettencourt et al., 2007), and the growth of corporations (Stanley et al., 1996)—all have a fractal structure. And so do cities. All of these phenomena are generated and maintained by a constant flux of energy. There is movement toward a consensus, although no proof, that all complex self-organizing, far-from-equilibrium systems are characterized by fractal structure. Per Bak has developed this point most thoroughly, and refers to it as the “principle of self-organized criticality” (Bak, 1994, 1996).
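One standard way to quantify such fractal structure is the box-counting dimension: cover the pattern with boxes of side s and fit the slope of log N(s) against log(1/s), where N(s) is the number of boxes containing part of the pattern. A minimal sketch (our own illustrative code, assuming the pattern is given as a set of occupied raster cells):

```python
import math

def box_count_dimension(cells, sizes=(1, 2, 4, 8, 16)):
    """Estimate the box-counting dimension of a set of (row, col) cells.

    Count the boxes of side s that contain at least one occupied cell,
    then fit the slope of log N(s) against log(1/s) by least squares.
    """
    points = []
    for s in sizes:
        boxes = {(r // s, c // s) for r, c in cells}
        points.append((math.log(1.0 / s), math.log(len(boxes))))
    m = len(points)
    mx = sum(x for x, _ in points) / m
    my = sum(y for _, y in points) / m
    return (sum((x - mx) * (y - my) for x, y in points)
            / sum((x - mx) ** 2 for x, _ in points))
```

For a filled 32 × 32 square of cells the estimate is 2 and for a single line of cells it is 1; the convoluted edge of a real city typically falls somewhere in between.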

For many systems, the fractal structure may take the form of a power law distribution of some quantity, where the frequency of the phenomenon is inversely proportional to a power of its size. For example, the number of species on an island as a function of island size, the number of trips to the downtown of a city as a function of distance, and the number of patches of residential land use in a city as a function of size of patch are all described by an equation of the form $y = ax^n$. But in spatially extended systems like cities, the fractal nature often appears as a characteristic form, for example, an extremely convoluted edge; the edge of a city, like a coastline, is typically a fractal, and its form can be represented as a power law.
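The exponent of such a power law is commonly estimated by ordinary least squares on log-log axes. The sketch below uses invented patch-size data (the function name and the data are illustrative assumptions, not from the book):

```python
import math

def fit_power_law(sizes, counts):
    """Estimate a and n in y = a * x**n by least squares on log-log axes."""
    xs = [math.log(s) for s in sizes]
    ys = [math.log(c) for c in counts]
    m = len(xs)
    mean_x = sum(xs) / m
    mean_y = sum(ys) / m
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
             / sum((x - mean_x) ** 2 for x in xs))
    return math.exp(mean_y - slope * mean_x), slope  # (a, n)

# Synthetic patch counts generated from y = 500 * x**-1.5 exactly:
sizes = [1, 2, 4, 8, 16, 32]
counts = [500 * s ** -1.5 for s in sizes]
a, n = fit_power_law(sizes, counts)
```

On real data the points scatter around the fitted line, and the quality of the fit is itself a test of whether the power law description is appropriate.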

Another general feature of self-organized far-from-equilibrium systems is that the self-organization emerges from a series of bifurcations, where the system must choose one of the possible states that open up to it as energy is put into the system. Bifurcations are often discussed as phenomena arising in natural systems, but of course they are mathematical phenomena as well, where the number of possible solutions to an equation may increase as the value of a parameter becomes larger. A classical geographical example is the spatial cost function, $C_j = \sum_i r_i w_i d_{ij}^n$, which gives the total cost, $C_j$, of shipping products to the destination, j, from a number of origin points, i, where $r_i$ is the rate charged on the route from i to j, $w_i$ is the amount shipped, and $d_{ij}$ is the distance; the parameter n represents the fact that shipping costs are usually not strictly proportional to distance. If we are looking for the location j that will minimize the total shipping costs, the problem is unambiguous for n ≥ 1: the equation has a single solution, that is, a single minimum. But n = 1 is a bifurcation point. For n < 1, there are a number of solutions, that is, a number of local minima (see box 3.2). These minima are analogous to the fitness peaks in Kauffman’s Boolean network models; thus, when Kauffman finds that the number of peaks grows as the mean number of connections increases, he is describing a bifurcation phenomenon. In many cases, for example, urban land use patterns, self-organization occurs as a result of the system passing through a series of bifurcations. The bifurcation structure itself may be a fractal, as in upstream travel on the river system of figure 2.1, and the result may then be that the self-organized system has a fractal structure.
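The bifurcation at n = 1 can be checked numerically. The sketch below is our own illustrative code, with all rates and weights set to 1: it samples the one-dimensional version of the cost function on a grid and counts the local minima. For n > 1 there is a single minimum between the origins, while for n < 1 a local minimum appears at each origin.

```python
def total_cost(x, origins, n, rates=None, weights=None):
    # C = sum_i r_i * w_i * d_i**n, with d_i the distance from x to origin i.
    rates = rates or [1.0] * len(origins)
    weights = weights or [1.0] * len(origins)
    return sum(r * w * abs(x - o) ** n
               for o, r, w in zip(origins, rates, weights))

def count_local_minima(origins, n, steps=2001):
    # Sample the cost surface between the outermost origins and count the
    # strict local minima, including the two endpoints.
    lo, hi = min(origins), max(origins)
    xs = [lo + (hi - lo) * i / (steps - 1) for i in range(steps)]
    cs = [total_cost(x, origins, n) for x in xs]
    minima = sum(1 for i in range(1, steps - 1)
                 if cs[i] < cs[i - 1] and cs[i] < cs[i + 1])
    minima += cs[0] < cs[1]
    minima += cs[-1] < cs[-2]
    return minima
```

With two origins at 0 and 1, n = 2 gives one minimum (at the midpoint) while n = 0.5 gives two (one at each origin); adding a third origin adds a third minimum in the n < 1 regime.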

The candidate principles that apparently govern the behavior of complex self-organizing systems and the quasi-universal attributes that describe them have emerged in the course of the last forty years as a result of work with relatively simple, highly generic models such as those we have discussed. These principles, though interesting in their own right, also provide support for more realistic, domain-specific models, including models aimed at practical applications. This is not to say, however, that the lessons of the generic models can be applied directly in these more specific models. The candidate principles, even if we are eventually able to drop the qualifier “candidate,” remain principles rather than covering laws. Even in the simple generic models we have discussed, in any one run of the model, they may not be manifest, or they may appear only as a weak tendency, or with a long delay. The same is true of the fractal characteristics. Nevertheless, the principles can help in the verification of a model’s structure: if a detailed model of a particular complex self-organized system is not compatible with the principles, that is, if it can never manifest them, then it is probably not the right model. Similarly, to the extent that the phenomenon being modeled exhibits various fractal characteristics, the model must be able to reproduce them; fractal measures thus contribute to model validation.

More generally, the formal knowledge we have of generic complex self-organizing systems provides guidance in formulating appropriate models in specific domains such as urban spatial structure. It tells us, for example, not to build an equilibrium model of urban land use like the Alonso-Muth model (Alonso, 1964; Muth, 1969). Even though the urban land market generally clears and is thus apparently in equilibrium at any particular time, this in itself tells us little about how actual land use patterns are established and evolve. On the other hand, it is often useful to embed equilibrium models like a land market model as elements in a dynamic model. In this context, the equilibrium models represent mechanisms operating in the real system and help structure the dynamics of the larger model. They are analogous to the frozen structures in Kauffman’s Boolean networks operating in the critical regime. In fact, those structures are limit cycles on an attractor—that is, they represent equilibrium behavior embedded in the nonequilibrium dynamics of the global model.

As a matter of practical convenience, it is often useful to design a model so that it emulates a model operating in the critical regime between order and chaos, rather than designing it so that it actually possesses such a regime. For example, the urban land use models discussed in this book would, if treated as deterministic, operate far into the ordered regime and produce unrealistic, simple land use maps having no fractal characteristics, whereas, with a random element, they produce complex, realistic maps characterized by fractal properties. The random element is, in effect, an efficient way of generating a large number of deterministic transition rules that together produce critical regime dynamics. From another point of view, the random element can be looked at as a simple way of emulating a heterogeneous population of agents (individual people, businesses, etc.) each with a land use decision rule; without this emulation, the deterministic model has only a few agents—one for each active land use—and a correspondingly small set of rules.
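One simple way to implement such a random element is to multiply each deterministic transition score by a heavy-tailed random factor before choosing a cell's new state. The sketch below is written under our own assumptions; the perturbation form and function name are illustrative, not necessarily the exact formulation used in the book's models.

```python
import math
import random

def choose_land_use(scores, alpha=1.0, rng=random):
    """Pick a cell's next land use from deterministic transition scores
    perturbed by a heavy-tailed random factor.

    Each score is multiplied by 1 + (-ln u)**alpha, with u uniform on
    (0, 1]: usually a modest nudge, but occasionally a large draw that
    lets a lower-scoring land use capture the cell.
    """
    noisy = {use: s * (1.0 + (-math.log(1.0 - rng.random())) ** alpha)
             for use, s in scores.items()}
    return max(noisy, key=noisy.get)
```

With alpha = 0 the factor is constant and the choice reduces to the deterministic maximum; as alpha grows, upsets become more frequent and the land use pattern becomes correspondingly more irregular.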

# Techniques for Modeling Cities and Regions

All of the models presented in this book are simulation models of dynamical systems. Most are based on cellular automata. But as we have seen, alternatives to CA exist, and the choice of technique depends on what is to be modeled. In principle, Boolean networks could be used to model urban systems; indeed, for some particular problems, they might be quite appropriate. But, in general, it is not clear how to align the structure of a Boolean network with the urban phenomena that we might be wanting to model, such as land use or the spatial distribution of employment. Artificial neural networks have occasionally been used (e.g., White, 1989), but, again, their structure is not easily aligned with the structure of most urban phenomena. Ideally, the modeling tool we use should be one that can represent directly and explicitly the important elements—the phenomena and the relations among them—of the system we wish to model. The closer we come to a one-to-one relation between the elements of reality and the model, the less the model is a black box and the greater the confidence we can have that it is an appropriate representation and can be relied on to give us useful insights into the behavior of the real system.

Cellular automata, Boolean networks, and artificial neural networks can all be viewed as agent-based systems, where the agents collectively constitute a computational technique, as in Langton’s swarm intelligence (Minar et al., 1996), but they do not necessarily represent any actual individuals. In contrast, in individual-based models, the agents do represent real individuals in the system being modeled—for example, real people, real businesses, or real houses; or, in the case of the nodes in Kauffman’s Boolean models, real genes. The advantage of using individual-based models to model complex systems is that they permit a system to be represented explicitly in as much detail as required, with the self-organizing macrostructure emerging bottom up by means of interactions among the individual agents. The results are rich and detailed, and our confidence in them depends not just on validation testing of model output, but also on the fact that we can see a one-to-one relationship between the model structure and the structure of the system being modeled: if the model looks like reality, then it is more likely that it functions like reality and has similar outcomes.

In principle, therefore, individual-based models are the most appropriate basis for simulating urban systems, and they have in fact been fairly widely used for that, either by themselves or in combination with cellular automata (see, for example, Filatova, Parker, and Van Der Meer, 2009; Parker and Filatova, 2008; Parker, Berger, and Manson, 2002; Parker et al., 2003; Portugali and Benenson, 1997; Portugali, 2000; Marceau and Benenson, 2011; Jin and White, 2012; Power, 2009, 2014). There are important practical considerations that limit their usefulness, however. The first is the apparently banal issue of run time. Because the urban models of interest here are all complex systems models of far-from-equilibrium systems, their behavior, as we have seen, is not entirely predictable, either because of chaotic dynamics or because of sensitivity to random perturbations in the initial conditions or behavioral rules. The self-organized structures that they generate emerge through a series of bifurcations, where the system must choose between two possible futures. In other words, these model systems have open futures. In order to calibrate an individual-based model and then to map out its possible behaviors and their relative probabilities, the model must be run many times. For this to be practical, the run time should be short, ideally, minutes. A model with many agents, perhaps millions in the case of an urban model, each with relatively realistic (i.e., complicated) behavioral rules, is not fast. Furthermore, there may be an aggregation problem since generally we are not interested in individual agents, but rather in the meso- and macro-scale patterns that emerge from their collective behavior and interaction. A related problem is that it may be difficult to link the emergence of particular patterns to specific features of the model. 
In short, individual-based models are ideal for modeling systems of modest size, or systems in which it is not possible to simplify or generalize the representation of agent behavior without losing essential characteristics of the system. But they are cumbersome for very large systems or systems where agent behavior can reasonably be generalized to some degree.

Cellular automata are ideal for modeling many urban phenomena because they have two great advantages: they are inherently spatial, and they are fast. A CA is by definition spatial, and therefore spatial phenomena can be mapped directly onto the cell space. In fact, a two-dimensional CA can be seen as a dynamic geographic information system (GIS): a map in a raster GIS is indistinguishable from the state of a CA at a particular iteration. In a sense, a CA simply adds a process to a raster geographic information system so that maps evolve in response to the rules of the process given their current state. The raster structure, together with the fact that the cell neighborhood and the transition rules are fixed, means that the execution time is typically very fast. In contrast, in an individual-based model, the “neighborhood” of an agent consists of the set of all those agents with which it interacts. That set is not in general fixed, but rather changes from agent to agent, and from one time period to the next because it is determined in part by the behavior of the agents.
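A minimal raster CA of this kind fits in a dozen lines. The sketch below uses a toy rule invented for illustration, not one of the book's models: the raster is treated as a torus, and a vacant cell becomes developed when enough of its eight Moore neighbors are already developed.

```python
def step(grid, threshold=3):
    """One synchronous update of a toy land-development CA on a raster.

    A vacant cell (0) becomes developed (1) when at least `threshold` of
    its eight Moore neighbors are developed; developed cells stay
    developed. The raster wraps around (a torus), as in many classical CA.
    """
    rows, cols = len(grid), len(grid[0])
    new = [row[:] for row in grid]
    for r in range(rows):
        for c in range(cols):
            if grid[r][c] == 0:
                developed = sum(grid[(r + dr) % rows][(c + dc) % cols]
                                for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                                if (dr, dc) != (0, 0))
                if developed >= threshold:
                    new[r][c] = 1
    return new

# Three developed cells in an L-shape: the cell completing the square has
# three developed neighbors and is developed on the next step.
grid = [[0] * 5 for _ in range(5)]
grid[1][1] = grid[1][2] = grid[2][1] = 1
grid = step(grid)
```

Each state of `grid` is exactly a raster map; iterating `step` is what turns the static map into a dynamic one.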

Of course, it is possible to define a CA cell space that is not a raster of square cells. An isotropic CA in which each cell is represented by coordinates chosen at random within it has been used to eliminate the geometric artifacts that can be generated by a regular raster (Markus and Hess, 1990), and in the name of realism, cadastral maps and maps of land use polygons have replaced the raster of square cells in some models. These alternative specifications of the cell space certainly have advantages: they yield a model that more closely replicates the actual system. But, typically, they severely degrade run time. One model using land use polygons to define the cell space required several hours to calculate one time step in a typical application. At that speed, it was impossible to run the model enough times to ever develop a comprehensive picture of its behavior. On the other hand, not all cell space modifications degrade run time. In fact, some, like the variable grid raster described in chapter 8, are introduced specifically to maintain fast execution times when other modifications to the CA would otherwise seriously slow run times.

Modifications to cell space are just one way that the classical CA as described in box 2.2 has been altered. In fact, researchers have tortured it almost out of recognition in the name of building realistic models of particular phenomena, and we are as guilty of this as anyone. No defining CA characteristic has been spared:

• Cells: As already described, regular grid cells have been replaced by other polygons, both regular and irregular.

• Cell space: Homogeneous cell space has been replaced by inhomogeneous space.

• Cell states: Discrete cell states have been replaced by continuous quantitative states, and even by vectors of states containing both discrete and quantitative representations.

• Cell neighborhood: The small local neighborhood has been replaced, in some cases, by a scattered, noncontiguous set of cells and, in others, by a contiguous but very large neighborhood, in at least one case covering the entire modeled area.

• Transition rules: Simple rules have given way to elaborate and occasionally model-generated ones.

• Updates: The simultaneous update rule for cell transitions has been replaced by asynchronous updates.

• Dynamics: Even the autonomous dynamics of the classical CA have been reined in by imposing exogenous constraints.

As we have mentioned, the two key considerations when modifying the classical CA are realism and run time. Unfortunately, these goals are often in conflict, so that there is a trade-off between them. In too many cases, however, the trade-off is ignored and realism is optimized at the cost of increased run time. Collectively, the models discussed in this book make use of cellular automata that have been altered in most of the ways listed, always in the name of realism, but always in a way that does not degrade run time. The variety of ways that CA can be modified enhances their power and versatility, and this has ensured that CA remain the technique of choice for many spatial modeling problems.

# Methodological and Epistemological Issues

Whatever technique we choose for modeling complex self-organizing systems, we find ourselves up against a number of methodological and epistemological issues that have not yet been fully resolved. In this respect, it seems that the science of these systems is indeed, as some have claimed, a new kind of science because it is giving rise to a new kind of philosophy of science. Traditionally, science has relied on the principles of reproducibility and predictability. Within this convention, scientific theories are verified through empirical testing, which requires that the theory make predictions that can be compared with data from actual observations. But what if the theory, for any specific application, predicted a variety of possible outcomes? If the theory predicted that these were the only possible outcomes under the specified conditions, and our empirical data did not correspond to any of them, then we would conclude that the theory was wrong. On the other hand, what if the data supported one of the predicted outcomes? Although the theory would have passed the test for that particular outcome, the other predicted outcomes might be ones that could never actually occur. If we had some way of knowing this to be so, then we would conclude that the theory was wrong in spite of its success on the particular test. Or if there were many more possibilities than those predicted by the theory, and we were able to know this, we would again conclude that the theory was defective.

One way around this problem is to make many tests, to see whether, in aggregate, the empirical observations do correspond to each of the predicted alternatives, and to no others. But this approach raises other questions. Are the various tests really comparable? Or are they instead tests of somewhat different theories? The problem is clear in the case of an urban land use model. When the model is applied to a specific test city, it will generate a large set of possible land use maps, and most of these will fall into one or another of a small set of classes of maps that are quite similar to one another; these similarity classes represent bifurcations. But there is only one empirical land use map, and we cannot rerun the real city many times to generate an ensemble of them to compare with the ensemble of maps generated by the model. If the empirical map falls into one of the high-probability similarity classes, our confidence in the model is strengthened; but we still have no way of knowing whether the other predicted classes are real, or only artifacts of a bad model. One way around this problem is to apply the model to other cities. But the maps of the other cities will be completely different, so the similarity classes, and even the number of these classes, will also be different. Furthermore, we will be testing a slightly different model for each city because the model will have to be recalibrated for each one and thus will have different values for some parameters; moreover, the land use categories will typically be defined somewhat differently for each city, and even for apparently equivalent classes, there may be differences due to different data sources or classification algorithms. In short, the situation is messy, and clean tests are not possible.

On the other hand, messiness is not the same as complexity, and the nature of complexity opens the way to a partial solution to these problems. The classical scientific paradigm developed in a context in which theories were deterministic and data could either be collected clean in the laboratory or cleaned up by statistical means. In the inductive quantitative social sciences, the working assumption is that there are underlying laws, or at least regularities, at work, but that these are hidden by the messiness—usually characterized as “noise”—that is to be found in almost all data sets. The laws are to be found by using statistical techniques to clean up the empirical data and extract the regularities. Cleaning up the data means that the many data points are replaced by a representative value such as the mean, and the “noise” thus eliminated is quantified by a measure such as the variance. The essence of this approach is to seek laws by destroying data.

The complex self-organizing systems view of this situation is quite different. The messiness in the data is most likely not noise, but rather an expression of the complexity generated by self-organizing systems. However, because in these systems a single process can generate a large number of possible outcomes, it is generally not possible to discover the underlying process (or “law”) inductively from empirical data. A powerful example of this is provided by Daniel Brown and colleagues (2005). Using a CA with a given set of parameter values to model land use, they generated two land use maps from the same initial conditions. Because of the bifurcation phenomenon, these maps were noticeably different: one had a large cluster of a particular land use in the northwest part of the map; the other had a similar cluster, but in the southeast. They treated one of these maps as an actual land use map and used it to calibrate the model. When the calibration was optimized, the model performed well at reproducing the map used to calibrate it but could not produce the other map. On the other hand, the original, correct, model—correct in the sense that it was the one that produced the “observed” landscape—did not perform as well as the model derived inductively because it frequently produced patterns that were quite unlike the “observed” one. In other words, the inductive procedure produced an incorrect model, and the correct model seemed to perform suboptimally.

Since, in general, it is not possible to generate good models of complex systems inductively, they must be created a priori. For human systems like cities, this is usually not such a difficult task because complex systems models are, as we have seen, bottom up: they generate the complexity from relatively simple local rules, and we often have relatively direct access to these rules because we can observe them in our daily lives. In other words, our experience can frequently guide us as we formulate a model, and our intuition about whether the structure of a model is a reasonable representation of the system we are trying to understand is often reliable. Of course, we cannot be satisfied with intuition. We must test the model, and at this stage, the complexity becomes useful. Unlike statistical models, which destroy data, complex self-organizing systems models generate data—often enormous amounts of data—although, of course, it is artificial data. As a consequence, there are many ways to characterize the output, and thus many measurements that can be made, and this increases the testability of the model.

For example, we may test a land use model by comparing an output map with the corresponding map of the actual land use. In many cases, this is done by making a cell-by-cell comparison and calculating a statistic like Kappa. For many purposes, including those for which Kappa was developed, this is an appropriate procedure. But, for evaluating a complex self-organizing systems model of land use, it is not. Because the model produces not one map but many, the appropriate comparison at the cell level would be between the probability of a particular land use and the actual use. But more relevant are comparisons made at the level of land use polygons because it is the patterns of land use that are important, and there are very many ways that patterns can be characterized. For example, there are measures of polygon shape such as those found in the FRAGSTATS software, measures of contiguity of polygons of different land use, wavelet-based measures, various local and global measures of fractal dimension, and so on. This is an active area of research, with new techniques appearing regularly (see Boots, 2006; Hagen-Zanker, 2008). But most of these measures are based on pattern characteristics that would be quite similar across all of the maps produced when a single application is rerun many times. For example, if we rerun the model repeatedly, we will get many maps, some of which will appear quite different from one another; but they will all have very similar values for the fractal dimension describing the patch size frequency distribution, even though the actual location of patches may be quite different on the various maps. Most of the other measures of pattern will also have this property: a given measure will yield very similar values across the range of maps.
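For concreteness, the cell-by-cell Kappa comparison can be sketched in a few lines. This is the standard Cohen's Kappa; treating each map as a flat list of cell categories is our own simplification for the example.

```python
from collections import Counter

def kappa(map_a, map_b):
    """Cohen's Kappa for two categorical maps given as flat cell lists.

    Corrects the raw cell-by-cell agreement for the agreement expected
    by chance given each map's class proportions.
    """
    n = len(map_a)
    observed = sum(a == b for a, b in zip(map_a, map_b)) / n
    freq_a = Counter(map_a)
    freq_b = Counter(map_b)
    expected = sum(freq_a[c] * freq_b.get(c, 0) for c in freq_a) / n ** 2
    return (observed - expected) / (1 - expected)
```

Identical maps give Kappa = 1, and agreement no better than chance gives Kappa = 0; but, as argued above, two model runs with the same patch-size fractal dimension can still score poorly against each other on Kappa, which is exactly why pattern-level measures are needed as well.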

To a significant degree, these various measures are independent of one another. Consequently, a model may perform very well on some of them and poorly on others. Although it is relatively easy to calibrate even a bad model so that it will perform well according to one or two measures, only a model that is both essentially correct in its structure and a relatively realistic representation of the system being modeled will be able to give good results according to a larger number of measures. Thus, although the Alonso-Muth land use model, a neoclassical equilibrium model that predicts concentric, single-use zones around the center of the city, passes one test—empirically, the various land uses do in fact differ in their mean distance from the center, so that there is in effect a statistical tendency toward concentric zones—it fails almost any other test. By not treating the self-organized city as a complex system, it fails to generate complex results—the concentric zones have no complexity, no patches of various sizes, no irregularities, no fractal properties. In contrast, complex self-organizing systems models of a city generate complex land use maps that can be tested in a wide variety of ways, and that therefore have a wide variety of ways in which they can succeed—or fail. According to Karl Popper’s falsifiability principle, this is a strength.

Traditionally, tests of theories are thought of in binary terms: either the test supports the theory or it refutes it, although it has long been recognized that the world, even the world addressed by science, is messy, and one test is rarely enough to make or break a theory. Popper (1959, 1963) proposed a more nuanced criterion for evaluating a theory: the more powerful a theory’s predictions, that is, the more improbable they are a priori, the more confidence we can have in the theory if the predictions are not falsified in a test. Since complex systems models produce voluminous, complex output that can be characterized in many ways, they can also be subjected to many independent tests. The greater the number of tests, the less probable it is a priori that the model will pass all of them. Therefore, the more tests and the greater the proportion of tests that the theory passes, the greater the confidence we can have in it. The multiple test approach also provides a partial solution to the dilemma posed by Brown: that the model that most reliably reproduces the actual land use map may well be the wrong model. Often the model calibration that appears optimal by a standard map similarity measure like Kappa fails other tests of pattern, such as those involving fractal dimensions. Calibrating the model to balance the optimization over a range of tests reduces the risk of a spurious calibration, though it does not eliminate it.

In his first book, Popper (1959) privileged falsification over verification on the grounds that strict verification of a universal statement is logically impossible outside the realm of mathematics and logic. In subsequent decades, he continued to develop the point of view implicit in this position, so that he ultimately arrived at an idea of science that could hardly be described in terms of logic: evolution would be the more appropriate word. In this view science, like the world it seeks to understand, is open, creative, undetermined. Ideas—and the theories and models that may express them—are just as much a part of the world as physical objects, and just as capable of acting as causal agents. This conception of the world and science is very much in harmony with the complex self-organizing systems approach, and it poses many of the same methodological questions. It is the beginning of a broader view of science, and of scientific methodology. It has already given rise to an appropriate philosophical basis—evolutionary epistemology. The root of the scientific imperative for definitive tests, for verification, was the desire to have certain knowledge and to know that it was certain. Perhaps the basic problem in the philosophy of science was to show that this was possible. Popper showed that it was not. Evolutionary epistemology develops the position that what it is logically possible to know evolves with the world and our knowledge of it. In a strange way, this position embeds epistemological philosophy in the real physical, biological world and makes it contingent on the state of that world. In other words, evolutionary epistemology does not stand outside the world, examining it from the outside; it is inside the world, examining it from the inside. It parallels the discovery that algorithms are necessarily embedded in real time, in the world; they do not stand outside and independent of it, as mathematics does. This is a form of realism.

# Implications for Planning

Urban and regional models based on a complex self-organizing systems approach are significantly more realistic than other types of urban models. This gives them the potential to evolve into a powerful tool for planners, although it is not always obvious to planners why such a tool would be useful. Planning practices vary widely around the world. Some countries practice a relatively comprehensive and technocratic form of planning to guide future spatial development. Others, however, do not attempt to determine the future pattern of land use; their planners focus instead on general characteristics like densities or technical specifications such as those required for the provision of streets, and thus have little interest in a land use forecast. But even in the countries where spatial planning is practiced seriously, planners may question the need for a land use forecasting tool since the point of their plans is to specify what the land use will be. In the Netherlands, for example, a CA land use model being considered for adoption was criticized on the grounds that it forecast a land use pattern that did not match the official plan. It was assumed that the future land use would be as shown in the plan.

Indeed, there is a real question as to whether planning is actually effective or merely illusory. Nurit Alfasi, Jonatan Almagor, and Itzhak Benenson (2012) examined the effect of the 1980 comprehensive land use plan for the area surrounding Tel Aviv. The plan specified in detail both the areas that could be developed for residential, commercial, and industrial uses and the areas that were to be retained as agricultural and natural areas. The authors found that, by the year 2000, 65% of all development and 75% of residential development did not conform to the 1980 plan because of variances and modifications made during the implementation period. On the other hand, in some cases land use restrictions have been effectively enforced. The London greenbelt, for example, has been maintained for more than sixty years, though at the cost of unplanned-for development beyond it. The lesson is that it is difficult to comprehensively override the intrinsic dynamics of urban or regional development by means of a prescriptive plan, although it is feasible to guide development if the process is understood and if there is a way of testing the effects of various interventions. As a planning tool, an urban or regional land use model can play a significant role in providing such guidance.

Whereas comprehensive land use planning may be absent or relatively ineffective in many places, infrastructure planning is universal and always has a major impact, whether anticipated or not. Here the need for reliable estimates of future development patterns is widely recognized since new infrastructure is typically planned to meet anticipated needs. In the case of highways especially, there is a recognized need for planning with integrated models, since the introduction of new transport elements into a region will alter the development pattern, and this will in turn modify future infrastructure needs. This problem led to the first integrated dynamic models of urban regions in the 1960s. For example, the Penn-Jersey model for the Philadelphia area represented the relationships among extensions of the transportation system, changes in population and employment by zone in the metropolitan area, and zonal changes in demand for transportation capacity. Although the same general approach was used in models developed for several other metropolitan areas, ultimately it passed from the scene. Today there is a resurgence of interest in land use–transportation interaction (LUTI) models because the need for them is widely (p.41) recognized and because the required resources in terms of data, modeling techniques, and computing power are now widely available. Complex systems models based on CA bring new capacities to the LUTI models. In particular, they model demand with much higher spatial resolution than conventional approaches, and because of the multiple outcomes that characterize these models, they give a better idea of the uncertainty inherent in the system being modeled.
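The kind of cell-level resolution referred to here can be made concrete with a deliberately minimal sketch. The code below is a hypothetical toy cellular automaton, not any of the calibrated CA models discussed in this book: a vacant cell's probability of developing rises with the number of already-developed cells in its neighborhood, so development self-organizes into clusters at the level of individual cells rather than large zones. All parameter values are illustrative assumptions.

```python
import random

# Toy land-use CA (hypothetical sketch). Cell states: 0 = vacant, 1 = developed.
# A vacant cell develops with a small base probability plus an attraction
# term proportional to the number of developed cells in its 3x3 neighborhood.
SIZE = 20

def step(grid, rng):
    new = [row[:] for row in grid]
    for i in range(SIZE):
        for j in range(SIZE):
            if grid[i][j] == 0:
                # count developed cells among the eight neighbors (toroidal edges)
                nbrs = sum(grid[(i + di) % SIZE][(j + dj) % SIZE]
                           for di in (-1, 0, 1) for dj in (-1, 0, 1)
                           if (di, dj) != (0, 0))
                p = 0.02 + 0.1 * nbrs  # assumed base rate and neighbor attraction
                if rng.random() < p:
                    new[i][j] = 1
    return new

def run(seed, steps=15):
    rng = random.Random(seed)  # seeded, so each run is reproducible
    grid = [[0] * SIZE for _ in range(SIZE)]
    grid[SIZE // 2][SIZE // 2] = 1  # a single initial settlement
    for _ in range(steps):
        grid = step(grid, rng)
    return grid

final = run(seed=1)
print(sum(map(sum, final)), "of", SIZE * SIZE, "cells developed")
```

Even this crude rule produces clustered rather than uniform development, which is the feature that lets CA-based models allocate demand at a much finer spatial grain than zone-based approaches.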

The complex self-organizing systems perspective suggests that effective spatial planning should be based on a realistic understanding of the spatial dynamics of urban and regional development. Successful planning would then consist of guiding that development in directions that are both feasible and desirable. The models embodying this perspective provide a way of investigating the likely effects of various policies that might be implemented to guide development in the desired direction. For example, what would be the effect of building a new ring road (beltway or loop in the United States)? Would the road be likely to increase the pressure to develop a particular area of productive agricultural land? The complex systems approach offers the possibility of reducing reliance on prescriptive land use regulations while increasing the effectiveness of land use planning. It may also improve the effectiveness of infrastructure planning by bringing about a better match between infrastructure needs and provision. One barrier to its more widespread adoption by planners, however, is the presence of multiple outcomes due to bifurcations.

Although planners are used to working with various scenarios, a complex systems model will produce a range of outcomes, many of them quite similar, but some quite different, even when applied to a single scenario. This can be confusing to end users, who, experience shows, tend to treat the output of one model run as the predicted outcome and to ignore the others. On the other hand, the prediction of multiple possible outcomes is potentially the most valuable feature of a complex systems model for many planning problems. Given that it is not possible to predict the future absolutely, the most important prediction is the qualitative one. What are the major possibilities, and how likely are they to occur? The many runs of a complex systems model together map out the bifurcation tree and, to the extent that the model is correct, give us this information. The model can then be used to run “what if” experiments on various policy options to find ones that increase the probability of arriving at one of the more desirable possible futures, and thus reduce the likelihood of ending up with an undesirable one. This is a relatively abstract way of looking at the planning problem, but nevertheless an important one because it emphasizes getting the big picture right.
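The procedure of mapping possible futures by repeated runs can be illustrated with a deliberately simple, hypothetical model (again, not one of the models discussed in this book): new development is allocated between two competing sites with probability proportional to each site's current size, a positive feedback that produces a bifurcation. Running the same scenario many times with different random seeds reveals the qualitative branches and their relative frequencies.

```python
import random
from collections import Counter

# Hypothetical two-site growth model with positive feedback: development
# goes to a site with probability proportional to its current size, so
# identical parameters ("the same scenario") still yield qualitatively
# different end states across runs.
def one_run(seed, steps=201):
    rng = random.Random(seed)
    size = [1, 1]  # two sites start equal
    for _ in range(steps):
        p_a = size[0] / (size[0] + size[1])
        size[0 if rng.random() < p_a else 1] += 1
    return "site A dominant" if size[0] > size[1] else "site B dominant"

# Many runs of the same scenario map out the branches of the bifurcation
# tree and estimate how likely each qualitative outcome is.
outcomes = Counter(one_run(seed) for seed in range(500))
for branch, count in sorted(outcomes.items()):
    print(f"{branch}: {count / 500:.0%} of runs")
```

In this spirit, a "what if" experiment amounts to changing a policy parameter, rerunning the ensemble, and asking whether the frequency of the desirable branch has increased.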

Moving complex systems models into planning applications promises not just to give planners a new tool, but also to deepen our understanding of cities and broaden our knowledge of the possibilities and limitations of the models themselves. With repeated applications in a variety of cities and regions, and with consequent (p.42) modifications and extensions, we should acquire a nuanced sense of the degree of confidence that we can have in these models under various circumstances. This amounts to evolving more successful models, models in which we can have ever greater confidence. The process is very much in the spirit of evolutionary epistemology. Complex self-organizing systems have been characterized as embodying both contingency and necessity. In treating the models as part of the system that is evolving them, we have added a third element—intentionality.