Issue 01 / Imagining Better Futures

Imagining Better Futures

HARNESSING STORIES TO CONTEND WITH RISK

Ed Finn

We are not very good at risk. We are better at stories. In an essay from 1998 titled “Risk Society,” Anthony Giddens responds to Ulrich Beck and argues that “the idea of ‘risk society’ might suggest a world which has become more hazardous, but this is not necessarily so. Rather, it is a society increasingly preoccupied with the future (and also with safety), which generates the notion of risk.”1 He points out that the risk society is a fundamentally modern phenomenon, emerging with the age of exploration and continuing to its intricate expressions today in the domains of finance, health, geopolitics, climate, and many more. Since the concept of the “risk society” emerged in the 1990s, our technical instruments for perceiving the future have grown more sophisticated: elaborate computer simulations, even quantum computing, backed by massive data troves. And yet the human brains contemplating these “futurescopes” are largely identical to those that navigated the Middle Ages, or the Bronze Age for that matter. To put it differently, we are not equipped to handle the math, to properly value or discount remote or unlikely catastrophes, or to effectively correlate lived experience and mathematical abstractions. And when humans have occasionally pursued the rational, mathematically prudent course, it has not been because of widespread buy-in to the results of technical analysis.
Instead, we make our decisions largely based on narratives. We are storytelling animals, constructing models of ourselves and reality on the fly. This reliance on stories can make us vulnerable to misinformation, groupthink, and poor judgment, but the narrative engines in our heads are also our best tools for contending with the future. As the authors of Homo Prospectus (2016) argue, echoing Giddens, we are cognitively as well as societally oriented towards the future. However, unlike Giddens, they see this prospective orientation as a deeper evolutionary adaptation: “The deadliest predator on the planet is not the strongest or the swiftest, but the one with the longest horizon of anticipation, homo sapiens.”2
Our unreliable memories and malleable interpretations of the present are best understood as narrative models, or humanistic simulations, of what could happen next. Like computational simulations, these stories about the world only work if we discard a lot of information for the sake of efficiency and consistency. But where computation depends on abstracting away from the particular, our narrative models gravitate towards it. Sometimes we can capture an entire history, a whole life trajectory, in a single gesture or phrase, using our innate capacities for narrative inference and extrapolation. Remember that very short story, sometimes attributed to Ernest Hemingway: “For sale: baby shoes, never worn.”3

Representing the Future

The deeply embedded narrative systems in the brain are designed to work with materials that are directly available: memory and experience, observations from the senses, and our finely tuned social awareness of how our actions will affect the feelings of those around us. Researchers in Japan, for example, have shown that asking communities engaged in long-term planning to select a spokesperson for future generations leads to deliberations that are more favorable to long-term sustainability and equity. It seems that having someone literally represent the future in conversations has a powerful impact on our narrative capacity to navigate risk and trade-offs. Extinction Rebellion and other recent climate protest movements can serve the same purpose by using public action and nonviolent protest to voice the needs of the future.
A second way in which stories help us contend with the future is in their function as cognitive management systems for complex environments. A good story conveys foreground and background, ambiguity and dissonance, while maintaining a central arc of meaning and purpose. The stories we tell over and over again also manage to be both general and particular, transmitting archetypes and mythic plot structures while still remaining grounded in concrete details, like the color of the eye that twitched open in Victor Frankenstein’s laboratory. A well-crafted story is thus a microcosm: a narrative experience that unfurls into a world in the imagination of its audience. In this way, a story becomes a kind of simulation itself, one whose rules are often implied by genre and narrative convention rather than explicitly stated. The details and conditions of the story are sketched out through characters, description, and plot, though many of the crucial actions of the simulation are left to the audience to create on their own.
The stories we tell about risk follow these same principles. Stories of luck and superstition persist not just for gamblers and athletes but also for NASA astronauts and flight engineers, who religiously bring peanuts to the Mission Control room every launch, among other idiosyncratic traditions. The most impactful conversations we have about climate change pivot on the experiences of individual people and places weathering superstorms or rising sea levels, rather than detailed statistics on average temperatures or atmospheric carbon dioxide. As political scientist Manjana Milkoreit has demonstrated, even climate policy experts and decision-makers often lack a clear positive vision of the future they are working towards, focusing instead on statistics or negative outcomes to be avoided.4
This absence of concrete positive visions for climate futures may explain why we continue to struggle to mobilize globally to contend with this existential threat. Without compelling, factually informed, hopeful visions of the future, fear and anxiety dominate. A paradoxical result is that the risk society has led to the emergence of vast industries specializing in risk narratives. From credit cards to pharmaceuticals, the marketers and pitchmen of risk rely on the narrative techniques of foreground and background, elision and analogy, to maximize the benefits and downplay the negatives of their products and services. Many of their stories are about specifically packaged risks: a car crash, a burglary, a disease. Rather than presenting a statistically grounded narrative about what actions might be most beneficial to the individual consumer (eating healthier food, for instance, to reduce the likelihood of heart disease), they often market solutions to unlikely but potentially serious risks, like a house being struck by lightning.
Even stepping back from the risk-themed caricatures presented in advertising, most of the stories we tell about risk are problematic at best. Our collective narrative response to the risk society has been to perfect the art of exquisite rationalization, spinning elaborate tales to justify our failure to make difficult decisions, take costly actions, or address uncomfortable realities. We continue to struggle with problems like food insecurity and extreme poverty even though scientific and logistical solutions to these problems are legion. Corporations like Fox and several other holdings of the Murdoch news empire have made a business out of terrifying and enraging their audiences, creating elaborate story-worlds around the risks of globalization, cultural pluralism, socialism, and so forth. The growing prominence of this narrative-driven, fear-based approach to risk has legitimized even more extreme risk stories, such as QAnon conspiracy theories in the U.S. and the hateful rhetoric of white supremacists. Considering our increasingly perilous relationship with such risk stories, one might be forgiven for wondering whether we are really so good at stories after all.

Failures of Imagination

The missing term we need to introduce at this juncture is imagination. When we bring a narrative to life in our minds, we are using the cognitive faculty of imagination to conjure up the characters and settings. Our brains recruit the visual and auditory circuits of sense perception when we imagine a story, and they even engage the emotional system so we can feel the story as well as envision it.5 Imagination is like the holodeck of the mind, enabling us to conjure up an infinite variety of scenes and possibilities, involving not just novel places and scenarios but also identities and personae. For this reason, imagination is the cornerstone of our relationship to risk as well as narrative. We must imagine risks to make them real. We narrativize them, translating a statistic about air travel safety, for example, into a vignette about a plane crash or a safe landing.
To be clear, we are much better at imagining some risks than others. The slow, systemic disasters of climate change are hard to narrativize, and even the acute trauma of a forest fire can be rationalized away as a rare, catastrophic occurrence. Some of us place these misfortunes into the supernatural category of “acts of God” even though they are entirely predictable, and entirely predicted, by our models of a changing climate. Others exert narrative imagination to rationalize these risks within the comforting context of a status-quo reality. In either case, we embed those imaginative constructs into laws, policies, and corporate structures, which implicitly and explicitly reiterate and reinforce particular narratives about what risks are real and how to contend with them. For example, the State of California subsidizes fire insurance to encourage rebuilding in the same areas where fires will inevitably return, contributing to a shared imaginary that underplays the long-term likelihood of future catastrophe, because confronting that likelihood would require Californians to reimagine too many things in the present, from political constituencies and public utilities to urban zoning.
There is a shorthand for our impoverished cultural relationship with risk: a failure of imagination. One classic modern example is the 9/11 terrorist attack and the findings of the subsequent 9/11 Commission Report, which uses the phrase “failure of imagination” repeatedly to describe the multiple coincident mistakes that left the US intelligence and security apparatus vulnerable to such a simple yet horrific form of attack.6 As with 9/11, a “failure of imagination” is not a vacuum but a situation in which the status quo and deep-rooted assumptions obscure facts and narratives that might otherwise be obvious.
In a similar fashion, the success of the Murdoch empire is a story of millions of people allowing their imaginations to be colonized (and monetized) by a corporation selling fear and anxiety sugar-coated as a narrative of resistance to fear and anxiety. The “economy of attention” that drives the multi-billion-dollar online advertising and consumer data markets is also fundamentally extractive of our imaginations. Zeynep Tufekci writes compellingly of how YouTube algorithms designed to maximize the number of hours a user spends on the site end up pushing ever more radical content to viewers, creating a catalyst for increasing polarization and warped perspectives.7 These systems leverage our predilection for risk narratives that are exciting, tempting, or salacious in order to create something akin to risk addiction, drawing us ever deeper into a landscape of anxiety and fear.
Furthermore, because we are bad at risk, we are also bad at distinguishing responsible narratives of risk from irresponsible ones. We are cognitively equipped to judge these stories as stories, but we find it much harder to judge their veracity or to hold them to principles of verifiability and transparency. When MIT researchers conducted a study after the 2016 election to understand why fake news circulates so freely online, their findings suggested that malicious bots were not the biggest threat to fact-based discourse.8 The problem, it turned out, was us: humans find fake news irresistible because it makes for more salacious, more compelling, more outrageous—that is to say, better—stories.

Imagining Better Futures

So what can we do? Imagination once again provides the answer. To craft better risk stories, we have to make the prudent pathways as compelling as the potential disasters that currently compel us. Rather than “doomscrolling” or scanning the headlines for news of fresh disasters, we have to cultivate our individual and collective powers to create and share stories of the futures we want. While dystopian visions and warnings will always have an important place in our shared imaginary about risk, we vastly underinvest our attention and energy in constructing positive visions of the future. It is helpful to think of risk narratives as a genre of stories. Like the mystery or the romance, the typical risk story we tell today has certain rules and expectations which express a causal model of reality. Sometimes these causal rules are borrowed from premodern narratives like fairy tales: being too ambitious or upsetting social norms will lead to a comeuppance; do not tempt fate by trying to improve your lot; obey your elders in all things. By recognizing the genre of risk, we can begin to ask what new rules and expectations we might want to use instead, and how we might disentangle our perceptions of the future from the genre we have already internalized.
From that starting point, we can begin to construct very different stories about the risks of the future that accommodate individual and community context. By redefining the individual as a co-creator of risk narratives, rather than as a pawn or powerless figure in a risk narrative created by someone else, we can change the stakes of the game. Most of us are used to telling such stories in more immediate contexts, such as deciding whether to run for the bus or wait for the next one. But we are rarely encouraged and poorly equipped to extrapolate beyond the familiar, to seriously consider, say, what life might be like in ten or fifteen years.
In his novel New York 2140, celebrated science fiction author Kim Stanley Robinson imagines such a future, one in which the process of revising the rules of the genre becomes a centerpiece of the plot.9 The New York of the title is akin to Venice, with skyscrapers intersected by canals as sea-level rise destabilizes real estate markets and coastal communities attempt to adapt. At the beginning of this novel, the characters are adapting as we might expect, creating new financial instruments to monetize the risks of climate change. But by the end, they have done something remarkable, nationalizing major banks and reconceptualizing the financial instruments of risk as ways to protect individuals from the worst impacts of climate instability, rather than to profit from them. In essence, Robinson creates a utopian vision of a new genre of risk.
To achieve something like what Robinson’s characters do in reinventing the genre of risk requires both imagination and efficacy. We need to cultivate the individual and collective imaginative capacity to identify risks and obstacles, to envision possible solutions, and to construct convincing narratives that bridge the present and a possible future in which those obstacles have been overcome.10 Doing this effectively requires a sense of self-efficacy: the confidence that the individual has the basic knowledge and cognitive skills to do this imaginative work, to assess the real validity of risks in a local and personal context, and to act on the results of such a narrative simulation.


Ed Finn is the founding director of the Center for Science and the Imagination at Arizona State University, where he is an associate professor in the School for the Future of Innovation in Society and the School of Arts, Media and Engineering. He also serves as the academic director of Future Tense, a partnership between ASU, New America and Slate Magazine, and a co-director of Emerge, an annual festival of art, ideas and the future. Ed’s research and teaching explore the workings of imagination, digital culture, creative collaboration, and the intersection of the humanities, arts and sciences. He is the author of What Algorithms Want: Imagination in the Age of Computing (MIT Press, spring 2017) and co-editor of Future Tense Fiction (Unnamed Press, 2019), Frankenstein: Annotated for Scientists, Engineers and Creators of All Kinds (MIT Press, 2017) and Hieroglyph: Stories and Visions for a Better Future (William Morrow, 2014), among other books. He completed his PhD in English and American Literature at Stanford University in 2011 and his bachelor’s degree at Princeton University in 2002. Before graduate school, Ed worked as a journalist at Time, Slate, and Popular Science.