RELEVANCE
Best brief introduction to complexity that I have seen. There are a number of useful concepts in the following passage from the introduction to Miller's book A Crude Look at the Whole.
The extended metaphor around maps and mapmaking is worth focusing on and thinking about. We too are mapmakers.
The idea that reductive thinking (breaking things down) is fundamentally different from constructive thinking is powerful and worth remembering. When we analyze companies we are in a reductive mode, but when we try to project financials into the future we are in a constructive mode, and the two are completely different.
Finally, the most powerful single idea to take from Miller about complexity is that simple, well-defined, and limited sets of rules can produce patterns of behavior of enormous complexity and diversity; simplicity and complexity are linked in their own way. In literary terms, we might say that the hedgehog and the fox are linked in the same way as simplicity and complexity.
A CRUDE LOOK AT THE WHOLE
Miller, John H.
[Emphasis in the passage below is ours.]
Introduction: True Places
It is not down in any map; true places never are. —Herman Melville, Moby Dick
Science is about mapmaking. It’s about taking a complicated world and reducing it to some sparse set of markings on a map that provides new guidance across an otherwise incomprehensible, and potentially hostile, landscape. A good map eliminates as much spurious information as possible, so that what remains is just enough to guide our way. Moreover, when the map is well made we gain a deeper understanding of the world around us. We begin to recognize that rivers flow in certain directions, towns are not randomly placed, economic and political systems are tied to geography, and so on.
Maps—and science—are often more about what we leave out than what we put in. As Jorge Luis Borges catalogs in his one-paragraph-long short story “On Exactitude in Science,” “The Cartographers Guilds struck a Map of the Empire whose size was that of the Empire, and which coincided point for point with it. The following Generations, who were not so fond of the Study of Cartography as their Forebears had been, saw that that vast Map was Useless.”
Different maps—even of the same landscape—provide different insights into the world. A topographic map provides information on the various hills and dales in the world in just enough detail to be useful to a hiker. A road map, with its sparse set of major cities and the roads that connect them, provides just enough information for a cross-country drive. Divorcing a map from its purpose inevitably leads to frustration. Too little of the right kind of detail, or too much of the wrong kind, encumbers our ability to understand the world.
Science has proceeded by developing increasingly detailed maps of decreasingly small phenomena. At the heart of this reductionist strategy is a hope that once we have detailed maps of the smallest of parts, we can paste the mosaic together and have a useful map of Borges’s Empire. That strategy fails, and while the result might please Borges’s Cartographers Guild, the mosaic is as much a fool’s errand as Borges envisioned.
The problem lies not in the incompleteness of our knowledge but in the dream—no, the fallacy—of reductionism. Reductionism fails because even if you know everything possible about the individual pieces that compose a system, you know very little about how those pieces interact with one another when they form the system as a whole. Detailed knowledge of a piece of glass does not help you see, and appreciate, the image that emerges from a stained-glass window.
Over the past few decades a new science has been brewing. It is a science that recognizes that there are fundamental principles governing our world—such as emergence and organization—that appear in various guises across all of the nooks and crannies of science. For example, in physics, individual atoms organize into magnets; in biology, cells organize into organisms; and in economics, traders organize into markets. The universality of these principles was a surprise to scientists accustomed to thinking in terms of scientific disciplines, and by necessity, this new science transcends the traditional boundaries imposed by our current academic institutions. It is a science where simple things produce complexity and complex things produce simplicity. It is a science that embraces new investigative tools, such as computers serving as modeling substrates, in order to escape the bounds imposed by our usual collection of scientific tools, such as the various pieces of mathematics, largely derived in the late 1600s, that we so often rely on today. More fundamentally, it is a science that challenges our traditional notion that understanding comes from reducing things to their simplest components.
Alas, the new science we are after, the one that may hold sway over critical aspects of our life and destiny, is, as Herman Melville says, “not down in any map; true places never are.” Science as currently practiced—with psychology separate from economics, physics separate from biology, and on and on—has been remarkably productive. The creative destruction of scientific ideas, with its inherent quest to define the frontier by publicly disclosing, evaluating, and correcting ideas, has provided us with an engine of insight. The cost, however, is that individual fields have become increasingly separated from one another intellectually. Taking an exact look at a small piece of the world has become the academic norm and has almost fully displaced taking what my Santa Fe Institute colleague Murray Gell-Mann calls “a crude look at the whole.”
That may seem a minor problem, but we see its importance when we look at the true places we wish to explore. Take any global-scale, societal challenge, such as financial collapse, climate change, terrorism, epidemics, revolution, or social change: not one neatly aligns with any particular academic field. Moreover, even if one did, the reductionist approach still may not let us understand the whole. The fundamental principles of complexity describe how even simple parts, once together, seemingly take on a life of their own. Having intimate knowledge of, say, each part of an engine, every bolt, piston, cam, and so on, tells us little about what happens when we put those pieces together and they begin to interact with one another. Moreover, such intimate knowledge gives us no insight into what would happen to the engine as a whole if, say, we increase the size of one of the cylinders.
Reduction gives us little insight into construction. And it is in construction that complexity abounds.
From agoras to amoebas, from bees to brains, from cities to collapse, and on up to zebra stripes, the world around us is an encyclopedia of complexity. Sometimes this complexity arises shaped by natural forces such as evolution, as in the consciousness that emerges from our brains. At other times we have a hand in its creation, as in the steady stream of prices that arises from the seemingly chaotic noise and gestures in a commodities trading pit. Without a science of complex systems, we have little chance to understand, let alone shape, the world around us.
The initial academic discussions of complex systems can be traced back to at least 1776, when Adam Smith, in his Wealth of Nations, briefly discusses the “invisible hand” as a force that leads self-interested traders to unintentional, socially desirable outcomes. Of course, scientific propositions that are based on an invisible hand are more akin to the invocation of a deity than to a scientific theory and are about as useful to an economist as one of Rudyard Kipling’s just-so stories is to a biologist trying to explain how a leopard gets its spots.
The modern movement of complex-systems thinking can be tied to the beginnings of the atomic and information ages, when scientists such as Stanislaw Ulam and John von Neumann, using some of the world’s first programmable electronic computers, began to blur the lines between traditional academic fields as they pursued questions such as whether a machine could be truly self-reproducing. Out of this effort arose a class of models that, starting with a collection of simple, well-defined pieces and interactions, results in a surprisingly rich set of global patterns.
The study of those patterns was an important step toward understanding not just the purpose of an animal’s markings—say, camouflage—but also how they arise. Is it necessary that there be some master plan contained within the DNA of a leopard that specifies the color of each location on its skin, similar to how a digital image file directs the color of each pixel on a computer display, or is there a more universal explanation that can tell us how a leopard gets its spots?
The simple mathematical and computational models begun by Ulam and von Neumann have given us a lens through which to look at the origins of such complexity. We find that the combination of simple pieces, locally interacting with one another, is sufficient to lead to global behavior that is rather alien to its origins. Thus, the likely answer to how the leopard gets its spots—or how a lowly (but dangerous) sea snail gets its shell pattern, or even how the cacophony of a trading pit results in a well-organized set of trades and prices—is at once far simpler, far more universal, and far more fascinating than we might imagine.
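The flavor of the Ulam-von Neumann models can be sketched with a one-dimensional cellular automaton (here Wolfram's "Rule 30", a standard example rather than anything specific to Miller's text): each cell updates from nothing but its own state and its two neighbors, yet the global pattern that unfolds is famously intricate.

```python
def step(cells, rule=30):
    """One synchronous update of an elementary cellular automaton on a ring."""
    n = len(cells)
    return [
        # Encode the (left, center, right) neighborhood as a number 0-7,
        # then look up that bit of the rule number.
        (rule >> (4 * cells[(i - 1) % n] + 2 * cells[i] + cells[(i + 1) % n])) & 1
        for i in range(n)
    ]

# Start from a single "on" cell and print a few generations.
row = [0] * 31
row[15] = 1
for _ in range(12):
    print("".join(".#"[c] for c in row))
    row = step(row)
```

Exactly as the passage describes, the rule table is tiny and local, while the triangle of cells it generates never settles into an obvious repeating pattern.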
Over the last few decades, the study of interacting systems has opened up a new frontier in our understanding of complex systems. Whether we consider abstract models running at the speed of light inside a computer or the carefully curated anthropological evidence of a century of rice farming, a small set of core principles governing complex systems has emerged. Interacting systems develop feedback loops among the agents, and these loops drive the system’s behavior. Such feedback is moderated or exacerbated depending on the degree of heterogeneity among the agents. Interacting systems also tend to be inherently noisy, and such randomness can have surprising global consequences. Of course, who interacts with whom is a fundamental property of these systems, and such networks of interaction are an essential element of complex systems.
Core principles such as feedback, heterogeneity, noise, and networks can be used to understand new layers of complexity. For example, there are complex systems, such as your mind, that generate coherent and productive decisions in a completely decentralized manner, seemingly without control. Other systems, facing deeply embedded constraints such as getting oxygen to all of the cells in your body, lead to scaling laws that can take seemingly disconnected parts of the world and align them along a simple relationship. Yet other systems, such as the members of a social movement, self-organize into critical states that begin to exhibit a common characteristic behavior. Many interacting systems develop cooperation among the agents, a complex behavior that, once arisen, allows agents to shift into a new realm of opportunity, and we are now in a position to understand such a transition. Finally, by repurposing methods and ideas that were first developed at the dawn of the modern science of complex systems, we can generate a new theorem about the behavior of adaptive systems. These core principles driving complex systems, and their application to understanding new layers of complexity, are the focus of the pages that follow.
…
Complex systems often have some inherent degree of randomness tied to the behavior of the agents or the structure of interactions. Perhaps surprisingly, such randomness can be useful. We often dread randomness in systems. Indeed, a key dictate in modern business management is to seek quality by removing all sources of randomness from any process. Given such imperatives, it is easy to think of randomness as a foe to be fought rather than as an opportunity to be embraced. The study of complexity suggests otherwise. Randomness is fundamental to Darwin’s theory of evolution, which relies on the notion that errors (variations) during reproduction will provide grist for the mill of selection and result in “endless forms most beautiful and most wonderful.”
Darwin’s theory, and the role of randomness therein, is really about discovery on rugged landscapes. Our ability to discover new opportunities, whether new forms of animal life or novel technologies, is tied to both the ruggedness of the underlying landscape and our search skills. On simple landscapes, even simple searches can find good outcomes. On rugged landscapes, such searches founder.
Landscapes become more rugged as the elements that compose them interact more. Suppose we are seeking, say, a novel drug cocktail to fight some disease. If each drug we add to the mix has an effect that is independent of the others, then we can quickly find the best cocktail just by adding the drugs one at a time and keeping only the ones that improve the cocktail’s overall efficacy. However, if the drugs interact with one another, this simple search strategy breaks down, as the various interactions no longer provide a clear signal on how best to proceed.
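The drug-cocktail argument can be made concrete in a toy model. The fitness numbers below are invented for illustration, not pharmacology; they only encode "independent effects" versus "interacting effects".

```python
from itertools import product

def greedy_search(fitness, n_drugs):
    """Add drugs one at a time, keeping each only if it helps."""
    cocktail = [0] * n_drugs
    for i in range(n_drugs):
        trial = cocktail.copy()
        trial[i] = 1
        if fitness(trial) > fitness(cocktail):
            cocktail = trial
    return cocktail

def best_cocktail(fitness, n_drugs):
    """Exhaustively check every combination (feasible only for small n)."""
    return max((list(c) for c in product([0, 1], repeat=n_drugs)), key=fitness)

# Independent effects: a smooth landscape, so greedy finds the optimum.
additive = lambda c: 2 * c[0] + 1 * c[1] - 3 * c[2]
print(greedy_search(additive, 3), best_cocktail(additive, 3))  # → [1, 1, 0] [1, 1, 0]

# Interacting effects: drugs 0 and 1 help only *together*, so greedy,
# seeing no gain from either alone, misses the best cocktail entirely.
rugged = lambda c: 5 * (c[0] and c[1]) - (c[0] != c[1])
print(greedy_search(rugged, 3), best_cocktail(rugged, 3))      # → [0, 0, 0] [1, 1, 0]
```

The failure on the second landscape is exactly the breakdown the paragraph describes: interactions remove the clear one-drug-at-a-time signal.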
It turns out that the introduction of randomness can greatly improve our ability to search on rugged landscapes. As James Joyce noted, “Errors . . . are the portals of discovery.” Just as evolution relies on variation to uncover most wonderful forms, introducing errors into a search can be a powerful strategy for discovery.
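A hedged sketch of this point: on an invented landscape where two bits must change together, a strict hill climber stalls at the start, while one that occasionally accepts random "errors" can cross the fitness valley.

```python
import random

def climb(fitness, start, steps=200, error_rate=0.0, seed=0):
    """Bit-flip hill climbing, optionally accepting random downhill moves.

    Returns the best state encountered during the search.
    """
    rng = random.Random(seed)
    x = list(start)
    best = x
    for _ in range(steps):
        trial = x.copy()
        i = rng.randrange(len(x))
        trial[i] = 1 - trial[i]                     # flip one bit
        if fitness(trial) > fitness(x) or rng.random() < error_rate:
            x = trial                               # accept uphill, or errors sometimes
        if fitness(x) > fitness(best):
            best = x
    return best

# The peak at [1, 1] is separated from [0, 0] by a fitness valley.
needs_both = lambda c: 5 * (c[0] and c[1]) - (c[0] != c[1])

print(climb(needs_both, [0, 0]))                  # → [0, 0] (the strict climber is stuck)
print(climb(needs_both, [0, 0], error_rate=0.2))  # the noisy climber can reach [1, 1]
```

The error-free climber rejects every single-bit move because each one looks worse; randomness is what buys it passage through the valley, which is Joyce's "portal of discovery" in miniature.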
Accepting randomness in a system forces us to give up some control. Yet when we are facing hard problems, this may be the right thing to do if we want to improve the outcome. More generally, it may be the case that carefully controlled, centralized systems are more of a modern artifact, driven by reductionist thinking, than a universal norm. Indeed, there are plenty of examples where the principles of feedback, heterogeneity, and randomness conspire to create complex systems that are without centralized control, yet quite productive. Effective decentralized decision making may be one of the best new old ideas to emerge from complex systems.
When we think about decision making, our natural tendency is to focus on our own decisions. Over the last few decades entire academic fields have been devoted to understanding how humans make decisions. While unraveling the mysteries of our deciding brain is a worthy enterprise, it is far too easy to overlook the vast number of decisions that take place elsewhere in the biological world. To take just one example, bacteria exist in environments that contain both useful and harmful chemicals, and thus they constantly must make life-and-death decisions about where to move, given the trade-offs among various opportunities. How is this possible without a brain? Even more intriguing, humans (presumably using a brain) and bacteria (presumably not using one) demonstrate similar patterns of choice errors in simple experiments.
The notion that one doesn’t need a brain to make good decisions is startling. From the lone bacterium on up to large-scale social systems such as honeybee hives and financial markets, we are surrounded by decision making. How can a swarm of honeybees make good decisions? The queen is not the leader. She leads a rather insular life, serving as a well-tended egg-laying machine, able to emit only signals about her health and existence, rather than operating instructions to the rest of the hive.
Karl von Frisch’s discoveries about honeybee communication in the late 1940s inspired generations of scientists to undertake the careful observation and analysis of honeybee behavior. Through this work, we are beginning to understand how a colony can sort out its various options and make good decisions without any central leadership. One particularly important decision for a colony—the difference between its perpetuation and demise—is finding a new location when the old one becomes too crowded.
A swarm of bees solves the problem of finding a new location through the use of a few simple rules and feedback mechanisms. Scout bees, after identifying a potential new site, advertise it to other scouts. The better the site, the more vigorously the scout promotes it. This decentralized process allows the sites to be sorted out and suitably investigated, and ultimately it results in the swarm tending to choose the best site relatively quickly without any central direction.
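The recruitment dynamic above can be caricatured with a deterministic mean-field update in which each site's share of dancing scouts grows in proportion to that site's quality. This is a stylized replicator-style sketch, not Seeley's empirical model, and the quality numbers are invented.

```python
def swarm_shares(quality, rounds=30):
    """Share of scouts committed to each site after repeated
    quality-proportional recruitment (a replicator-style update)."""
    shares = [1.0 / len(quality)] * len(quality)
    for _ in range(rounds):
        # Scouts advertising better sites recruit proportionally more.
        weighted = [s * q for s, q in zip(shares, quality)]
        total = sum(weighted)
        shares = [w / total for w in weighted]
    return shares

# Three candidate nest sites; the best (quality 0.9) wins decisively.
print([round(s, 3) for s in swarm_shares([0.2, 0.9, 0.4])])
```

Even this crude version shows the key property: vigor-weighted re-advertising is positive feedback, so commitment snowballs toward the best site without any central decision maker.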
Understanding such decentralized processes has numerous benefits. It solves an interesting, life-or-death case of honeybee natural history. It also shows how decentralized mechanisms can be used to solve hard problems. This suggests an approach that we might be able to hijack for our own use in, say, coordinating computer networks or large-scale human organizations. Finally, and perhaps most profoundly, such decentralized mechanisms give us new insights into related phenomena. For example, perhaps bees are to neurons as hives are to brains. Are swarm decisions akin to human consciousness?
Complexity arises in systems of interacting agents. Take some agents with simple behavior, connect them together in a particular way, and some global behavior will result. Alter the connections and, often, new global behavior arises. Given this, knowing how patterns of interactions—that is, networks—influence behavior is fundamental to understanding complex systems.
Even in simple models such as lakeside neighbors competing to keep up with one another, interesting patterns begin to emerge. Starting from such a simple system, we can alter the connections slightly and find radically different behavior taking over. Indeed, by introducing only a few long-range connections, we find that it may be a small world after all, where anyone can connect to anyone else using only a few intermediaries. If neighbors can connect to one another, they can influence one another. Thus, the networks that define neighborhoods drive system-wide behavior. This behavior is often surprising. For example, a well-mixed world where neighbors are tolerant of others easily segregates into neighborhoods of homogeneous types.
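The segregation result mentioned above comes from Schelling's model; the sketch below is a compact one-dimensional variant with illustrative parameters. Agents on a ring are content with a merely half-same neighborhood, yet unhappy agents swapping positions tends to produce long homogeneous runs rather than a fine mix.

```python
import random

def schelling_ring(n=40, tolerance=0.5, steps=4000, seed=0):
    """Toy 1-D Schelling dynamics: unhappy agents swap with a random agent."""
    rng = random.Random(seed)
    ring = [rng.choice("AB") for _ in range(n)]
    for _ in range(steps):
        i = rng.randrange(n)
        # Fraction of the four nearest neighbors sharing agent i's type.
        same = sum(ring[(i + d) % n] == ring[i] for d in (-2, -1, 1, 2))
        if same / 4 < tolerance:          # unhappy agent...
            j = rng.randrange(n)          # ...swaps places with a random agent
            ring[i], ring[j] = ring[j], ring[i]
    return "".join(ring)

print(schelling_ring())  # typically long same-type runs despite mild preferences
```

The point, as in the text, is that a system-wide pattern (segregation) emerges that no individual agent prefers or intends.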
One of the more surprising principles coming out of the complexity that abounds is the existence of scaling laws. Starting in the late 1800s, biologists began to notice that, when appropriately scaled, various physical and physiological features of a variety of organisms aligned in a simple way. A simple rule links the metabolism of a single cell to that of a blue whale. Knowing the heart rate and weight of, say, a mouse allows us to predict the heart rate of, say, a thousand-pound cow. The ability to make such predictions is tied to the fundamental constraints that govern such complex systems. In this case, limits on how densely we can pack the pathways needed to provide resources to the organism drive the scaling.
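The mouse-to-cow prediction can be sketched numerically using the quarter-power scaling law (heart rate falling roughly as body mass to the -1/4 power); the input figures below are rough textbook values, not measurements.

```python
def predicted_heart_rate(known_mass_kg, known_rate_bpm, target_mass_kg):
    """Extrapolate heart rate assuming rate scales as mass^(-1/4)."""
    return known_rate_bpm * (known_mass_kg / target_mass_kg) ** 0.25

# From a 25 g mouse at ~600 beats/min to a ~450 kg (thousand-pound) cow:
print(round(predicted_heart_rate(0.025, 600, 450)))  # → 52
```

The prediction lands in the range actually observed for cattle, which is the force of the passage: one exponent spans organisms differing in mass by four orders of magnitude.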
Scaling laws arise in other complex systems as well. The size of cities or firms tends to follow well-defined laws, with the largest having twice the size of the second-largest, three times that of the third-largest, and so on. Similarly, in a book, the word that is most commonly used is twice as likely to occur as the next most commonly used word. Even the number and death tolls of wars are governed by a scaling law.
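The rank-size pattern just described is Zipf's law: the item of rank r has size proportional to 1/r. A quick sketch, using a hypothetical largest-city population of 8 million:

```python
def zipf_sizes(largest, n):
    """Sizes of the top-n items under an exact rank-size (Zipf) law."""
    return [largest / rank for rank in range(1, n + 1)]

print([round(s) for s in zipf_sizes(8_000_000, 4)])  # → [8000000, 4000000, 2666667, 2000000]
```

The same 1/rank shape describes word frequencies in a book and, approximately, firm sizes and war casualties, which is why the text treats it as one law with many guises.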
Knowing the scaling laws that govern our lives provides a portal into our future. For example, over the last century we have seen a trend toward urbanization. More than half of the world’s population now lives in urban areas. Is such a trend good or bad for humanity? The answer to this question is tied to the coefficients of various scaling laws of cities. These will tell us whether more urbanization will allow us to use fewer resources, be more inventive, and so on. Similarly, the scaling laws of wars may hint at how many conflicts with how many deaths we are likely to see in the future.
In complex social systems we often see the emergence of cooperation. Agents in systems can either compete or cooperate with one another. Competition makes you slightly better off, while cooperation makes you much better off. Unfortunately, most social systems have incentives that favor, at least individually, competition over cooperation. Such systems can easily end up with the inferior, competitive outcome.
Notwithstanding incentives to compete rather than cooperate, complex social systems may find ways to achieve the cooperative and socially superior outcome. On the island of Bali, farmers have been farming the picturesque rice terraces sustainably for more than a thousand years. This cooperation persists despite what would appear to be overwhelming economic incentives to compete with one another for the scarce water. However, by carefully unraveling the complex dynamics that govern this ecosystem and applying the principles of feedback and networks discussed above, we can resolve this apparent anomaly. Oddly, the neighborhood feedbacks from the presence of damaging crop pests and diseases realign each farmer’s incentive to share water, and with such sharing, society is better off. Moreover, the newfound need for coordinated cropping opens up a niche for an elaborate religious institution with various shrines and temples tied to the irrigation systems.
We can also formulate an abstract model from which we can observe and understand the emergence and persistence of cooperation. We find that in a world red in tooth and claw, where competition can easily overwhelm the system, slight variations in competitive strategies provide a means by which cooperation can emerge. Cooperative agents develop a way to communicate so as to recognize one another. By doing so, they get the benefits of cooperation while minimizing losses when they encounter competitive agents. Through such a mechanism, cooperation can emerge and be sustained.
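A stylized sketch of why a recognition signal lets cooperation invade (the payoffs are the standard prisoner's-dilemma values, but the model is an illustration, not Miller's exact one): cooperators who can spot each other reap the mutual-cooperation payoff while defecting against defectors, so they outscore both blind cooperators and the defectors around them.

```python
# Standard prisoner's-dilemma payoffs, with T > R > P > S.
T, R, P, S = 5, 3, 1, 0

def payoff(me_cooperates, other_cooperates):
    """My payoff in a single prisoner's-dilemma encounter."""
    if me_cooperates and other_cooperates: return R   # mutual cooperation
    if me_cooperates:                      return S   # I was exploited
    if other_cooperates:                   return T   # I exploited them
    return P                                          # mutual defection

def score(recognizes, share_cooperators):
    """Average payoff for a cooperator in a population with the given
    share of fellow cooperators, with or without a recognition signal."""
    if recognizes:
        # Cooperate only with fellow cooperators; defect against defectors.
        return share_cooperators * R + (1 - share_cooperators) * P
    # Blind cooperation: exploited by every defector met.
    return share_cooperators * R + (1 - share_cooperators) * S

print(score(True, 0.5), score(False, 0.5))  # → 2.0 1.5
```

Against a population of recognizing cooperators, a defector earns only the mutual-defection payoff P, so once the signal exists, cooperation both pays and persists, which is the mechanism the paragraph describes.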
…
At the heart of complex adaptive systems are agents searching for better outcomes. With a few simplifications, the key aspects of this search behavior can be linked to elements of the algorithm above. Thus, agents in such systems are, unknowingly, performing a dance governed by a cosmic algorithm. Given this connection, we derive a new theorem of complex adaptive systems that embraces the magic inherent in the algorithm. This new theorem implies that as agents adapt in these complex systems, their adaptations are governed by probabilities tied to their underlying fitness. While agents are more likely to be found concentrating on the better solutions, there is always a (lower) chance that they will find themselves in suboptimal circumstances. This is a result that is at once both gratifying and humbling, as it suggests that while agents will often find the best outcomes, they will inevitably fail on occasion.
Complexity abounds. Exploring its core principles will take us on a journey across the scientific landscapes outlined above. It is a journey marked by awe, inspiration, and ultimately insights that are critical to our scientific understanding of the world around us and to our ability to survive when confronted by our most challenging problems. It is a journey about true places, where the maps are not always well formed, but they are suggestive enough to be of use given our innate desire and need to explore this frontier.