Resilience

Why Things Bounce Back

eBook - ePub

  1. 336 pages
  2. English
  3. ePUB (mobile friendly)
  4. Available on iOS & Android

About This Book

Discover a powerful new lens for viewing the world with fascinating implications for our companies, economies, societies, and planet as a whole. What causes one system to break down and another to rebound? Are we merely subject to the whim of forces beyond our control? Or, in the face of constant disruption, can we build better shock absorbers—for ourselves, our communities, our economies, and for the planet as a whole? Reporting firsthand from the coral reefs of Palau to the back streets of Palestine, Andrew Zolli and Ann Marie Healy relate breakthrough scientific discoveries, pioneering social and ecological innovations, and important new approaches to constructing a more resilient world. Zolli and Healy show how this new concept of resilience is a powerful lens through which we can assess major issues afresh: from business planning to social development, from urban planning to national energy security—circumstances that affect us all. Provocative, optimistic, and eye-opening, Resilience sheds light on why some systems, people, and communities fall apart in the face of disruption and, ultimately, how they can learn to bounce back.

Frequently asked questions

Simply head over to the account section in settings and click on “Cancel Subscription” - it’s as simple as that. After you cancel, your membership will stay active for the remainder of the time you’ve paid for. Learn more here.
At the moment all of our mobile-responsive ePub books are available to download via the app. Most of our PDFs are also available to download and we're working on making the final remaining ones downloadable now. Learn more here.
Both plans give you full access to the library and all of Perlego’s features. The only differences are the price and subscription period: With the annual plan you’ll save around 30% compared to 12 months on the monthly plan.
We are an online textbook subscription service, where you can get access to an entire online library for less than the price of a single book per month. With over 1 million books across 1000+ topics, we’ve got you covered! Learn more here.
Look out for the read-aloud symbol on your next book to see if you can listen to it. The read-aloud tool reads text aloud for you, highlighting the text as it is being read. You can pause it, speed it up and slow it down. Learn more here.
Yes, you can access Resilience by Andrew Zolli,Ann Marie Healy in PDF and/or ePUB format, as well as other popular books in Business & Decision Making. We have over one million books available in our catalogue for you to explore.

Information

Publisher: Free Press
Year: 2012
ISBN: 9781451683844

1

ROBUST, YET FRAGILE
This is a book about why things bounce back.
In the right circumstances, all kinds of things do: people and communities, businesses and institutions, economies and ecosystems. The place in which you live, the company at which you work, and even, without realizing it, you yourself: Each is resilient, or not, in its own way. And each, in its own way, illuminates one corner of a common reservoir of shared themes and principles. By understanding and embracing these patterns of resilience, we can create a sturdier world, and sturdier selves to live in it.
But before we can understand how things knit themselves back together, we have to understand why they fall apart in the first place. So let’s start with a thought experiment:
Imagine, for a moment, that you are a tree farmer, planting a new crop on a vacant plot of earth. Like all farmers, to get the most use out of your land, you will have to contend with any number of foreseeable difficulties: bad weather, drought, changing prices for your product, and, perhaps especially, forest fires.
Fortunately, there are some steps you can take to protect your tree farm against these potential dangers. To lower the risk of fire, for example, you can start by planting your saplings at wide but regular intervals, perhaps 10 meters apart. Then, when your trees mature, none of their canopies will touch, and a spark will have trouble spreading from tree to tree. This is a fairly safe but fairly inefficient planting strategy: Your whole crop will be less likely to burn to the ground, but you will hardly be getting the most productive use out of your land.
Now imagine that, to increase the yield of your farm, you randomly start planting saplings in between this grid of regularly spaced trees. By definition, these new additions will be much closer to their neighbors, and their canopies will grow to touch those of the trees surrounding them. This will increase your yield, but also your risk: If a fire ignites one of these randomly inserted trees, or an adjoining one, it will have a much higher chance of spreading through its leaves and branches to the connected cluster of trees around it.
At a certain point, continuing to randomly insert trees into the grid will result in all of the trees’ canopies touching in a dense megacluster. (This is often achieved when about 60 percent of the available space is filled in.) This, in contrast to your initial strategy, is a tremendously efficient design, at least from a planting and land-use perspective, but it’s also very risky: A small spark might have disastrous consequences for the entire crop.
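A minimal simulation sketch, not from the book, can make the sudden appearance of that megacluster concrete: it plants "trees" at random on a square grid and measures how much of the crop belongs to the single largest cluster of touching canopies at different fill fractions. The roughly 60 percent figure in the text corresponds closely to the classic site-percolation threshold on such a lattice (about 0.59); the grid size and fill fractions below are arbitrary choices for illustration.

```python
# Illustrative sketch (not from the book): random "planting" on a square grid.
# Near a fill fraction of ~0.6, the largest 4-connected cluster abruptly grows
# to span most of the planted trees, mirroring the megacluster in the text.
import random
from collections import deque

def largest_cluster_fraction(n, fill_fraction, seed=0):
    """Fill an n x n grid at random; return the share of planted sites
    that belong to the single largest 4-connected cluster."""
    rng = random.Random(seed)
    planted = {(r, c) for r in range(n) for c in range(n)
               if rng.random() < fill_fraction}
    if not planted:
        return 0.0
    seen, best = set(), 0
    for start in planted:
        if start in seen:
            continue
        # Breadth-first search over neighboring planted sites.
        queue, size = deque([start]), 0
        seen.add(start)
        while queue:
            r, c = queue.popleft()
            size += 1
            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                nxt = (r + dr, c + dc)
                if nxt in planted and nxt not in seen:
                    seen.add(nxt)
                    queue.append(nxt)
        best = max(best, size)
    return best / len(planted)

if __name__ == "__main__":
    for p in (0.40, 0.55, 0.60, 0.65):
        share = largest_cluster_fraction(n=200, fill_fraction=p)
        print(f"fill {p:.2f}: largest cluster holds {share:.0%} of the trees")
```

Running the sketch shows the trade-off the thought experiment describes: below the threshold the crop breaks into many small, fire-isolated clumps, while just above it most trees join one connected mass that a single spark could reach.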
Of course, being a sophisticated arborist, you’re unlikely to plant your trees in either a sparse grid or a dense megacluster. Instead, you choose to plant groves of densely planted trees interspersed with roads, which not only provide you access to the interior of your property but act as firebreaks, separating one part of the crop from another and insuring the whole from a fire in one of its parts.
These roads are not free: By reducing the area usable for planting, each imposes a cost, so you must be careful where you put them; adding too many roads is just as bad as adding too few. But eventually, with lots of trial and error, and taking into account local variations in the weather, soil, and geography, you may alight upon a near-perfect design for your particular plot—one that maximizes the density of trees while making smart and efficient use of roads to access them. Your tree farm’s exceptional design will easily withstand the occasional fire, without ever burning entirely to the ground, all the while providing you with seasonally variable but not wildly gyrating timber returns.
Imagine your horror, then, when one day you discover that much of your perfectly designed plot has been infested by an invasive foreign beetle. This tiny pest, native to another geographic region entirely, stowed away on a shipment from an overseas supplier, then hitchhiked its way to the heart of your tree farm by clinging to your boot. Once inside, it exploited the very features of your design—your carefully placed roads—that were intended to insure against the risks you thought you’d confront.
It is at this moment in our thought experiment that you have painfully discovered that, in systems terms, your tree farm design is robust-yet-fragile (or RYF), a term coined by California Institute of Technology research scientist John Doyle to describe complex systems that are resilient in the face of anticipated dangers (in this case forest fires) but highly susceptible to unanticipated threats (in this case exotic beetles).
On any given day, our news media is filled with real-world versions of this story. Many of the world’s most critical systems—from coral reefs and communities to businesses and financial markets—have similar robust-yet-fragile dynamics; they’re able to deal capably with a range of normal disruptions but fail spectacularly in the face of rare, unanticipated ones.
As in our tree-planting example, all RYF systems involve critical trade-offs, between efficiency and fragility on the one hand and inefficiency and robustness on the other. A perfectly efficient system, like the densely planted tree farm, is also the most susceptible to calamity; a perfectly robust system, like the sparsely planted tree farm, is too inefficient to be useful. Through countless iterations in their design (whether the designer of a system is a human being or the relentless process of natural selection), RYF systems eventually find a midpoint between these two extremes—an equilibrium that, like our roads-and-groves tree farm design, balances the efficiency and robustness trade-offs particular to the given circumstance.
The complexity of the resulting compensatory system—the network of the roads and groves in the example above—is a by-product of that balancing act. Paradoxically, over time, as the complexity of that compensatory system grows, it becomes a source of fragility itself—approaching a tipping point where even a small disturbance, if it occurs in the right place, can bring the system to its knees. No RYF design can therefore ever be “perfect,” because each robustness strategy pursued creates a mirror-image (albeit rare) fragility. In an RYF system, the possibility of “black swans”—low-probability but high-impact events—is engineered in.
The Internet presents a perfect example of this robust-yet-fragile dynamic in action. From its inception as a U.S. military funded project in the 1960s, the Internet was designed to solve a particular problem above all else: to ensure the continuity of communications in the face of disaster. Military leaders at the time were concerned that a preemptive nuclear attack by the Soviets on U.S. telecommunications hubs could disrupt the chain of command—and that their own counterstrike orders might never make it from their command bunkers to their intended recipients in the missile silos of North Dakota. So they asked the Internet’s original engineers to design a system that could sense and automatically divert traffic around the inevitable equipment failures that would accompany any such attack.
The Internet achieves this feat in a simple yet ingenious way: It breaks up every email, web page, and video we transmit into packets of information and forwards them through a labyrinthine network of routers—specialized network computers that are typically redundantly connected to more than one other node on the network. Each router contains a regularly updated routing table, similar to a local train schedule. When a packet of data arrives at a router, this table is consulted and the packet is forwarded in the general direction of its destination. If the best pathway is blocked, congested, or damaged, the routing table is updated accordingly and the packet is diverted along an alternative pathway, where it will meet the next router in its journey, and the process will repeat. A packet containing a typical web search may traverse dozens of Internet routers and links—and be diverted away from multiple congestion points or offline computers—on the seemingly instantaneous trip between your computer and your favorite website.
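A toy sketch, not actual router software, can illustrate the idea of a routing table and of diversion around damage: each hypothetical "router" computes a table of next hops from the links it knows about, and when a node goes down the tables are recomputed so the packet follows an alternative path. The network layout and names are invented for the example.

```python
# Toy illustration (not a real routing protocol): per-node next-hop tables,
# recomputed when a router fails so traffic is diverted around the damage.
import heapq

LINKS = {  # undirected links with costs; invented for the example
    "A": {"B": 1, "C": 4},
    "B": {"A": 1, "C": 1, "D": 5},
    "C": {"A": 4, "B": 1, "D": 1},
    "D": {"B": 5, "C": 1},
}

def next_hops(source, links):
    """Shortest paths from `source` (Dijkstra); return {destination: first hop},
    i.e. this router's routing table."""
    dist, first = {source: 0}, {}
    heap = [(0, source, None)]
    while heap:
        d, node, hop = heapq.heappop(heap)
        if d > dist.get(node, float("inf")):
            continue  # stale entry
        if hop is not None:
            first.setdefault(node, hop)
        for nbr, cost in links.get(node, {}).items():
            nd = d + cost
            if nd < dist.get(nbr, float("inf")):
                dist[nbr] = nd
                heapq.heappush(heap, (nd, nbr, hop if hop else nbr))
    return first

def route(packet_dst, source, links):
    """Follow each router's own table hop by hop until the packet arrives."""
    path, node = [source], source
    while node != packet_dst:
        node = next_hops(node, links)[packet_dst]  # consult this router's table
        path.append(node)
    return path

if __name__ == "__main__":
    print("normal path A->D:", route("D", "A", LINKS))
    # Simulate the failure of router B: drop it and its links, then re-route.
    damaged = {n: {m: c for m, c in nbrs.items() if m != "B"}
               for n, nbrs in LINKS.items() if n != "B"}
    print("path A->D with B down:", route("D", "A", damaged))
```

In the sketch the packet normally travels A, B, C, D; with router B removed, the recomputed tables divert it along A, C, D, which is the distributed, damage-routing behavior the paragraph describes.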
The highly distributed nature of the routing system ensures that if a malicious hacker were to disrupt a single, randomly chosen computer on the Internet, or even physically blow it up, the network itself would be unlikely to be affected. The routing tables of nearby routers would simply be updated and would send network traffic around the damaged machine. In this way, it’s designed to be robust in the face of the anticipated threat of equipment failure.
However, the modern Internet is extremely vulnerable to a form of attack that was unanticipated when it was first invented: the malicious exploitation of the network’s open architecture—not to route around damage, but to engorge it with extra, useless information. This is what Internet spammers, computer worms and viruses, botnets, and distributed denial of service attacks do: They flood the network with empty packets of information, often from multiple sources at once. These deluges hijack otherwise beneficial features of the network to congest the system and bring a particular computer, central hub, or even the whole network to a standstill.
These strategies were perfectly illustrated in late 2010, when the secrets-revealing organization WikiLeaks began to divulge its trove of secret U.S. State Department cables. To protect the organization from anticipated retaliation by the U.S. government, WikiLeaks and its supporters made copies of its matériel—in the form of an encrypted insurance file containing possibly more damaging information—available on thousands of servers across the network. This was far more than the United States could possibly interdict, even if it had had the technical capability and legal authority to do so (which it didn’t). Meanwhile, an unaffiliated, loose-knit band of WikiLeaks supporters calling itself Anonymous initiated distributed denial-of-service attacks on the websites of companies that had cut ties with WikiLeaks, briefly knocking the sites of companies like PayPal and MasterCard off line in coordinated cyberprotests.
Both organizations, WikiLeaks and Anonymous, harnessed aspects of the Internet—redundancy and openness—that had once protected the network from the (now-archaic) danger that had motivated its invention in the first place: the threat of a strike by Soviet missiles. Four decades later, their unconventional attacks (at least from the U.S. government’s perspective) utilized the very features of the network originally designed to prevent a more conventional one. In the process, the attacking organizations had proven themselves highly resilient: To take down WikiLeaks and stop the assault of Anonymous, the U.S. government would have had to take down the Internet itself, an impossible task.
Doyle points out a dynamic quite similar to this one at work in the human immune system. “Think of the illnesses that plague contemporary human beings: obesity, diabetes, cancer, and autoimmune diseases. These illnesses are malignant expressions of critical control processes of the human body—things like fat accumulation, modulation of our insulin resistance, tissue regeneration, and inflammation that are so basic that most of the time we don’t even think about them. These control processes evolved to serve our hunter-gatherer ancestors, who had to store energy between long periods without eating and had to maintain their glucose levels in their brain while fueling their muscles. Such biological processes conferred great robustness on them, but in today’s era of high-calorie, junk-food-saturated diets, these very same essential systems are hijacked to promote illness and decay.”
To confront the threat of hijacking, Internet engineers add sophisticated software filters to their routers, which analyze incoming and outgoing packets for telltale signs of malevolent intent. Corporations and individuals install firewall and antivirus software at every level of the network’s organization, from the centralized backbones right down to personal laptops. Internet service providers add vast additional capacity to ensure that the network continues to function in the face of such onslaughts.
This collective effort to suffuse the system with distributed intelligence and redundancy may succeed, to varying degrees, at keeping some of these anticipated threats at bay. Yet, even with such actions, potential fragility has not been eliminated; it’s simply been moved, to sprout again from another unforeseeable future direction. Worse, as in all RYF systems, over time the complexity of these compensatory systems—antivirus software, firewalls—swells until they become a source of potential fragility themselves, as anyone who’s ever had an important email accidentally caught in a spam filter knows all too well.
Along the way, paradoxically, the very fact that a robust-yet-fragile system continues to handle commonplace disturbances successfully will often mask an intrinsic fragility at its core, until—surprise!—a tipping point is catastrophically crossed. In the run-up to such an event, everything appears fine, with the system capably absorbing even severe but anticipated disruptions as it was intended to do. The very fact that the system continues to perform in this way conveys a sense of safety. The Internet, for example, continues to function in the face of inevitable equipment failures; our bodies metabolize yet another fast-food meal without going into insulin shock; businesses deal with intermittent booms and busts; the global economy handles shocks of various kinds. And then the critical threshold is breached, often by a stimulus that is itself rather modest, and all hell breaks loose.
When such failures arrive, many people are shocked to discover that these vastly consequential systems have no fallback mechanisms, say, for resolving the bankruptcy of a major financial institution or for the capping of a deep-sea spill.
And in the wake of these catastrophes, we end up resorting to simplified, moralistic narratives, featuring cartoon-like villains, to explain why they happened. In reality, such failures are more often the result of the almost imperceptible accretion of a thousand small, highly distributed decisions—each so narrow in scope as to seem innocuous—that slowly erode a system’s buffer zones and adaptive capacity. A safety auditor cuts one of his targets a break and looks the other way; a politician pressures a regulator on behalf of a constituent for the reduction of a fine; a manager looks to bolster her output by pushing her team to work a couple of extra shifts; a corporate leader decides to put off necessary investments for the future to make the quarterly numbers.
None of these actors is aware of the aggregate impact of his or her choices, as the margin of error imperceptibly narrows and the system they are working within inexorably becomes more brittle. Each, with an imperfect understanding of the whole, is acting rationally, responding to strong social incentives to serve a friend, a constituent, a shareholder in ways that have a significant individual benefit and a low systemic risk. But over time, their decisions slowly change the cultural norms of the system. The lack of consequences stemming from unsafe choices makes higher-risk choices and behaviors seem acceptable. What was the once-in-a-while exception becomes routine. Those who argue on behalf of the older way of doing things are perceived to be fools, paranoids, or party poopers, hopelessly out of touch with the new reality, or worse, enemies of growth who must be silenced. The system as a whole edges silently closer to possible catastrophe, displaying what systems scientists refer to as “self-organized criticality”—moving closer to a critical threshold.
This dynamic—and our first hints about what we might do to improve the resilience of such systems—can be seen in two very different robust-yet-fragile systems that, with new analytical tools, are beginning to powerfully illuminate each other: coral reef ecology and global finance.
OF FISHERIES AND FINANCE
In the 1950s, Jamaica’s coral reefs were a thriving picture-postcard example of a Caribbean reef, supporting abundant mixtures of brilliantly colored sponges and feather-shaped octocorals sprouting up from the hard coral base. The reefs were popular habitats for hundreds of varieties of fish, including large predatory species such as sharks, snappers, gro...

Table of contents

  1. Cover
  2. Title Page
  3. Dedication
  4. Introduction: The Resilience Imperative
  5. Chapter 1: Robust, Yet Fragile
  6. Chapter 2: Sense, Scale, Swarm
  7. Chapter 3: The Power of Clusters
  8. Chapter 4: The Resilient Mind
  9. Chapter 5: Cooperation When it Counts
  10. Chapter 6: Cognitive Diversity
  11. Chapter 7: Communities that Bounce Back
  12. Chapter 8: The Translational Leader
  13. Chapter 9: Bringing Resilience Home
  14. Acknowledgments
  15. Organizations We Admire
  16. Notes
  17. Index
  18. About the Authors
  19. Copyright