29 April 2020, updated 4 May 2020, 9:28am

How to handle an uncertain future

Why accepting the unknown can be a radical proposition.

By Simon Kuper

“We must expect to be hit by an epidemic of an infectious disease resulting from a virus which does not yet exist,” write John Kay and Mervyn King in their book about how to handle an unknowable future, which was published in March. Radical Uncertainty helps us think about our current predicament. It’s also a poignantly Oedipal attack on the terribly flawed economics profession that spawned these two authors.

Six months ago, if you were studying the possibility of a global pandemic, you would have noted that there hadn’t been one of the scale of the Spanish flu in the West in a century. You might have calculated that the probability of one was so negligible it wasn’t worth preparing for. Indeed, the Trump administration closed down the pandemic response team in the White House in 2018, and eliminated the post of a US epidemiologist embedded in China’s Center for Disease Control and Prevention. However, other countries did prepare. Angela Merkel raised the possibility of a pandemic in one of her first private talks with Donald Trump. 

Coronavirus provides a case study of how to prepare for the future when models and data aren’t much help. Kay and King come from a world of models and data. Professional economists for 50 years, they first wrote a book together (The British Tax System) in 1978. They were raised largely on the economic orthodoxy of that era: the notion that people, businesses and governments are cold, rational decision-makers who plan for the future by weighing all probabilities and aiming for optimal outcomes. The economist Milton Friedman, high priest of this orthodoxy, said: “We may treat people as if they assigned numerical probabilities to every conceivable event.” For Friedman’s Chicago School, everybody was “homo economicus”, a human calculator making rational, self-interested plans.

This cocky view of humanity reached its apex between 1990 and 2007. Communism had failed, the market worked, and we had learned to manage the economic future. Gordon Brown proclaimed “an end to boom and bust”. The Nobel laureate Robert Lucas told the American Economic Association in his 2003 presidential address: “Macroeconomics… has succeeded: its central problem of depression prevention has been solved, for all practical purposes, and in fact has been solved for many decades.” 

In this period Kay and King rose to professional esteem and some power: King became governor of the Bank of England in 2003. Then the financial crisis of 2008 happened on his watch. The predictive models used by investment banks – which valued sub-prime American mortgages as safe assets – crumbled overnight. Central banks realised that their forecasting models left out the financial system. “An economic crisis originating in the financial system was therefore impossible,” the authors note wryly. 

In late middle age, at the pinnacle of their careers, these men had to discard any remaining fidelity to the world-view they had been taught. As they touchingly concede: “Over 40 years, the authors have watched the bright optimism of a new, rigorous approach to economics – an optimism which they shared – dissolve into the failures of prediction and analysis which were seen in the global financial crisis of 2007-08. And it is the pervasive nature of radical uncertainty which is the source of the problem.”

What went wrong, they explain, is that the Friedman-esque prescription of planning for the future by drawing up models, filling them with numbers, inputting probabilities, and making a self-interested decision only works in a few, limited circumstances. It’s useful if you’re playing poker: you know which cards are in the deck, and the probability of each coming up. It’s useful if you are an insurer using big data to assign probabilities as to whether a particular person will have a heart attack, or a car accident. Data-filled models are also good for predicting tomorrow’s weather, or even sending a rocket to Mercury. That’s because modelling works when a system is well-understood (a pack of cards, risk factors for heart attacks) and when it doesn’t change much over time. 

But models aren’t good at predicting any system that changes – such as almost anything involving humans. You cannot plausibly model your pension because you don’t know how many years you will work, how much you will earn, how the economy and stock market will change during your career, or when you will retire and die. 

Models are also little use in statecraft. A model couldn’t tell the US how to deal with Saddam Hussein’s Iraq, because he ran a unique regime in a unique global context. In fact, every situation in human affairs is unique. Loose analogies with past situations might tell you something, but they can also mislead you: George W Bush went wrong partly by interpreting Saddam’s Iraq as Hitler’s Germany and applying Churchill’s playbook. There are no fixed laws of society. 

We have also discovered that models are terrible at producing economic forecasts. That’s mostly because unprecedented things that don’t appear in any model – novel ways of packaging mortgages, an animal transmitting a virus in a Wuhan wet market – keep happening. The International Monetary Fund’s Spring World Economic Outlook predicted zero of the 207 recessions that occurred through 2016. The present recession has hit us as unexpectedly as the one in 2008. Given our inability to forecast, the Remain campaign in 2016 should have known better than to lead with an economic forecast: that Brexit would produce a “do-it-yourself recession” and cost each household an implausibly precise £4,300 a year. 

The most potent critique of homo economicus has come from behavioural economists, led by Amos Tversky and Daniel Kahneman. They argue that people aren’t rational decision-makers at all. In their view, we cannot successfully pursue our own self-interest, because human thinking is distorted by all sorts of biases. For instance, we are loss-averse: we attach more importance to a potential loss than to a potential equivalent gain. We are overconfident, we become fixated on the first number mentioned in a negotiation (a phenomenon called “anchoring”), and so on. Behavioural economics had arguably killed homo economicus before Kay and King (the Ks) entered this debate.

But the Ks have no time for the behaviourists either. Radical Uncertainty argues that it’s wrong to think of humans as individual agents trying and failing to maximise their future self-interest. Rather, we have a uniquely human set of tools for thinking about the future. For a start, we are social animals. We think not as individuals but in groups, whether that’s inside a company or a tribe or a government. We come to decisions by debating and sharing information with others. To quote the German playwright Bertolt Brecht: the smallest human unit is two people. It follows that we are altruists rather than Friedman-esque self-seekers. In the words of Nobel laureate Richard Thaler, “humans” are not “econs”. 

The phrase “storytelling is universal” seems to feature in most recent books, and the Ks use it too. Their point is that we try to understand events not by using data and models, as computers do, but through stories. Even Jeff Bezos, chief executive of data-mining Amazon, starts meetings by making senior executives spend half an hour silently reading a narrative memo about some aspect of the company. Bezos says: “The thing I have noticed is that when the anecdotes and the data disagree, the anecdotes are usually right. There’s something wrong with the way you are measuring it.”

The Ks conclude from all this: “Humans excel at finding ways to cope with open-ended mysteries… We are not defective versions of computers… but human beings with individual and collective intelligence evolved over millennia.” But that seems too kind. Often, humans are rubbish at coping with open-ended mysteries, and deal with them by concocting conspiracy theories or picking scapegoats or voting for Trump.

The Ks do have some useful advice on decision-making under uncertainty, much of it gleaned from history. One president who improved his decision-making the hard way was John F Kennedy. In 1961 he authorised the disastrous Bay of Pigs invasion of Cuba by anti-Castro exiles. With hindsight, Kennedy realised that he’d presided over meetings in which officials were afraid to voice doubts about the mission. Invasion was obviously his favoured option, so everybody clustered around it. This was what came to be labelled “groupthink”.

When the Cuban missile crisis erupted a year later, Kennedy handled it differently. He solicited criticism of proposed policy. He also kept in mind a worst-case scenario, “the awful unpredictability of escalation”, which the military seemed to underestimate. He avoided backing the Soviet leader Nikita Khrushchev into a corner. Instead of trying to optimise, he steered towards a compromise: the Soviets pulled their missiles out of Cuba in exchange for a secret American promise to take theirs out of Turkey. 

The Ks advise against aiming for the optimal outcome, partly because it’s impossible to identify and partly because the smarter path is to pursue a good-enough outcome while trying to minimise the risk of catastrophe. They have other tips for decision-making: treat every human situation as a one-off. Don’t enter it with strong prior beliefs, like a Marx or a Friedman or a Jacob Rees-Mogg, because a fixed theory can lead you to overlook the particulars of the case. Instead of reaching at once for data sets, ask, “What’s going on here?” 

More recommendations: in human affairs, don’t attach any meaning to precise probabilities. But do use numbers, stories and simple models and rival theories – any crutch for thinking can be helpful. Solicit criticism. Be quick to say, “I don’t know.” 

If you’re a leader, be Obama rather than Trump: consult specialists, encourage them to speak freely, but don’t ask for certainty. Effective leaders understand that they have “superior responsibility” rather than “superior wisdom”, write the Ks. Imagine how things might go horribly wrong – an unintentional nuclear war, say, or a pandemic. Choose strategies that are robust even when the future turns out unexpectedly: part of the beauty of George Kennan’s “containment” strategy in the Cold War was that it could be ratcheted up when necessary, but would work if the Soviets didn’t seek war.

All these are sound recommendations. However, they aren’t very surprising. Most people know even without reading Radical Uncertainty that the future is unknowable, consultation is good, groupthink bad, and that models aren’t the real world. 

The authors admit in their preface that, when discussing their ideas with friends and colleagues, they got very different reactions from “general readers” and “specialists”. The former tended to “find the concept of radical uncertainty natural and indeed obvious… Many people who have been trained in economics, statistics, or decision theory, however, find it difficult to accept the centrality of radical uncertainty.”

On this issue, ordinary people are probably cleverer than economists. 

Radical Uncertainty is jam-packed with erudition, sometimes too much so. The stories about everyone from Max Planck to David Beckham are well-told, but they risk turning the book into a grab-bag of everything the Ks have picked up in their combined century of professional endeavour, right down to a potted history of dentistry. There is a lot of value here. But perhaps the book works best as a modern version of those takedowns of communism that recovering ex-Marxists wrote in the 1930s and 1940s: intelligent people trying to come to terms with the fact that the dogmas they had swallowed weren’t actually true. 

With hindsight, the Ks were unlucky to come of age in an era of intellectual overconfidence, when the future seemed knowable. The generation whose formative economic experiences have been the crises of 2008 and 2020 may never be confident of anything again. 

Simon Kuper is an author and Financial Times columnist 

Radical Uncertainty
John Kay and Mervyn King 
The Bridge Street Press, 544pp, £25

This article appears in the 29 Apr 2020 issue of the New Statesman, The second wave