
Long reads
25 September 2014 (updated 23 October 2014, 10:11am)

Apocalypse soon: the scientists preparing for the end times

A growing community of scientists, philosophers and tech billionaires believe we need to start thinking seriously about the threat of human extinction.

By Sophie McBain

Illustration: Darrel Rees/Heart

The men were too absorbed in their work to notice my arrival at first. Three walls of the conference room held whiteboards densely filled with algebra and scribbled diagrams. One man jumped up to sketch another graph, and three colleagues crowded around to examine it more closely. Their urgency surprised me, though it probably shouldn’t have. These academics were debating what they believe could be one of the greatest threats to mankind – could superintelligent computers wipe us all out?

I was visiting the Future of Humanity Institute, a research department at Oxford University founded in 2005 to study the “big-picture questions” of human life. One of its main areas of research is existential risk. The physicists, philosophers, biologists, economists, computer scientists and mathematicians of the institute are students of the apocalypse.

Predictions of the end of history are as old as history itself, but the 21st century poses new threats. The development of nuclear weapons marked the first time that we had the technology to end all human life. Since then, advances in synthetic biology and nanotechnology have increased the potential for human beings to do catastrophic harm by accident or through deliberate, criminal intent.


In July this year, long-forgotten vials of smallpox – a virus believed to be “dead” – were discovered at a research centre near Washington, DC. Now imagine a similar incident in the future, but involving an artificially generated killer virus or nanoweapons. Some of these dangers are closer than we might care to imagine. When Syrian hackers sent a message from the Associated Press Twitter account that there had been an attack on the White House, the Standard & Poor’s 500 index briefly lost $136bn in value. What unthinkable chaos would be unleashed if someone found a way to empty people’s bank accounts?

While previous doomsayers have relied on religion or superstition, the researchers at the Future of Humanity Institute want to apply scientific rigour to understanding apocalyptic outcomes. How likely are they? Can the risks be mitigated? And how should we weigh up the needs of future generations against our own?

The FHI was founded nine years ago by Nick Bostrom, a Swedish philosopher, when he was 32. Bostrom is one of the leading figures in this small but expanding field of study. It was the first organisation of its kind in the UK, and Bostrom is also an adviser on the country’s second: the Centre for the Study of Existential Risk at Cambridge University, which was launched in 2012. There are a few similar research bodies in the US, too: in May, the Future of Life Institute opened in Boston, joining the Machine Intelligence Research Institute in Berkeley, California.

“We’re getting these more and more powerful technologies that we can use to have more and more wide-ranging impacts on the world and ourselves, and our level of wisdom seems to be growing more slowly. It’s a bit like a child who’s getting their hands on a loaded pistol – they should be playing with rattles or toy soldiers,” Bostrom tells me when we meet in his sunlit office at the FHI, surrounded by yet more whiteboards. “As a species, we’re giving ourselves access to technologies that should really have a higher maturity level. We don’t have an option – we’re going to get these technologies. So we just have to mature more rapidly.”

I’d first met Bostrom in London a month earlier, at the launch of his most recent book, Superintelligence: Paths, Dangers, Strategies. He had arrived late. “Our speaker on superintelligence has got lost,” the chair joked. It was a Thursday lunchtime but the auditorium at the RSA on the Strand was full. I was sitting next to a man in his early twenties with a thick beard who had leaned in to ask, “Have you seen him argue that we’re almost certainly brains in vats? It sounds so out-there, but when he does it it’s so cool!” He looked star-struck when Bostrom eventually bounded on stage.

Dressed in a checked shirt, stripy socks and tortoiseshell glasses, Bostrom rushed through his presentation, guided by some incongruously retro-looking PowerPoint slides. The consensus among experts in artificial intelligence (AI) is that a computer with human-level intelligence will be developed within the next 50 years, he said. Once that has been achieved, it might not take long to develop machines smarter than we are. This new superintelligence would be extremely powerful, and may be hard to control. It would, for instance, be better at computer programming than human beings, and so could improve its own capabilities faster than scientists could. We may witness a “superintelligence explosion” as computers begin improving themselves at an alarming rate.
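
To make the shape of that runaway loop concrete, here is a minimal toy simulation. It is my own illustration rather than anything Bostrom presented, and every name and number in it is an assumption: one hypothetical system improves at a fixed rate, as a project driven by human engineers might, while the other’s improvement at each step is proportional to its current capability, because a better programmer is also better at improving itself.

```python
# Toy sketch of recursive self-improvement; all figures are made up for illustration.

def fixed_rate(capability: float, steps: int, gain: float = 0.1) -> float:
    """Improvement arrives from outside at a constant rate (human engineers)."""
    for _ in range(steps):
        capability += gain
    return capability

def self_improving(capability: float, steps: int, gain: float = 0.1) -> float:
    """Each improvement is proportional to current capability: c += gain * c."""
    for _ in range(steps):
        capability += gain * capability
    return capability

if __name__ == "__main__":
    for steps in (10, 50, 100):
        print(f"{steps:3d} steps: fixed {fixed_rate(1.0, steps):5.1f}, "
              f"self-improving {self_improving(1.0, steps):10.1f}")
```

The gap between the two columns is the whole argument: steady outside effort stays linear, while a system that feeds its gains back into itself pulls away at an accelerating rate.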

If we handle it well, the development of superintelligence might be one of the best things ever to happen to humanity. These smart machines could tackle all the problems we are too stupid to solve. But it could also go horribly wrong. Computers may not share our values and social understanding. Few human beings set out to harm gorillas deliberately, Bostrom pointed out, but because we are cleverer, society is organised around our needs, not those of gorillas. In a future controlled by ultra-smart machines, we could well be the gorillas.

After the book launch, the publishers invited me to join them and Bostrom for a drink. We drank white wine, and Bostrom asked for green tea. Over a bowl of wasabi peanuts, he mentioned casually that he likes to download lectures and listen to them at three times the normal speed while he exercises. “I have an app that adjusts the pitch so it’s not like a Mickey Mouse voice,” he explained, assuming perhaps that this was the reason for my surprised expression. I sensed that he was quite keen to leave us to our wine. “Nick is the most focused person I know,” a colleague later told me.

****

Bostrom hated school when he was growing up in Helsingborg, a coastal town in Sweden. Then, when he was 15, he stumbled on the philosophy section of his local library and began reading the great German philosophers: Nietzsche, Schopenhauer, Kant. “I discovered there was this wider life of the mind I had been oblivious to before then, and those big gates flung open and I had this sense of having lost time, because I had lost the first 15 years of my life,” he told me. “I had this sense of urgency that meant I knew I couldn’t lose more years, or it could be too late to amount to anything.”

There was no real “concept” of existential risk when Bostrom was a graduate student in London in the mid-1990s. He completed a PhD in philosophy at the London School of Economics while also studying computational neuroscience and astrophysics at King’s College London. “People were talking about human extinction but not about ways of thinking about permanently destroying our future,” he says.

Yet he began to meet people through the mailing lists of early websites who were also drawn to the ideas that were increasingly preoccupying him. They often called themselves “transhumanists” – referring to an intellectual movement interested in the transformation of the human condition through technology – and they were, he concedes, “mainly crackpots”.

In 2000 he became a lecturer in philosophy at Yale and then, two years later, he returned to the UK as a postdoctoral fellow at the British Academy. He planned to pursue his study of existential risk in tandem with a philosophy teaching post, until he met the futurologist and multibillionaire computer scientist James Martin, who was interested in his work. In 2005 Martin donated $150m to the University of Oxford to set up the Oxford Martin School, and the FHI was one of the first research bodies to receive funding through the school. It can be a challenge to be taken seriously in a field that brushes so close to science fiction. “The danger is it can deter serious researchers from the field for fear of being mistaken or associated with crackpots,” Bostrom explained. A “university with a more marginal reputation” might have been less confident about funding such radical work.

Nevertheless, the FHI has expanded since then to include 18 full-time research staff, drawn from a wide range of disciplines. I spoke to Daniel Dewey, a 27-year-old who last year left his job as a software engineer at Google to join the FHI as a research fellow studying machine superintelligence. A few of his colleagues at Google had introduced him to research on the emerging risks of AI, and he began reading about the subject in his spare time. He came across a problem related to the safety of AIs that he couldn’t solve, and it became the hook. “I was thinking about it all the time,” Dewey recalled.

One of the concerns expressed by those studying artificial intelligence is that machines – because they lack our cultural, emotional and social intuition – might adopt dangerous methods to pursue their goals. Bostrom sometimes uses the example of a superintelligent paper-clip maker which works out that it could create more paper clips by extracting carbon atoms from human bodies. “You at least want to be able to say, ‘I want you to achieve this simple goal and not do anything else that would have a dramatic impact on the world,’ ” Dewey explained.
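
One way to picture what Dewey is asking for is as an objective function with a penalty for side effects. The sketch below is my own hypothetical illustration, not a formalism used at the FHI, and the names and the weight in it are assumptions. The unsolved part, which is why the simple request turns out to be so hard, is defining the impact term rigorously enough that a superintelligent optimiser cannot satisfy it in some perverse way.

```python
# Hypothetical "low-impact" objective; every name and number here is assumed.

IMPACT_WEIGHT = 1_000.0  # how strongly side effects count against the goal

def score(paperclips_made: float, world_impact: float) -> float:
    """Reward for the stated goal minus a penalty for changing the world.

    The open problem is the world_impact measure itself: it has to be
    specified so precisely that the agent cannot game it.
    """
    return paperclips_made - IMPACT_WEIGHT * world_impact

print(score(paperclips_made=1_000, world_impact=0.0))           # modest plan: 1000.0
print(score(paperclips_made=1_000_000, world_impact=10_000.0))  # extract-the-carbon plan: -9000000.0
```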

Unfortunately, it turns out that it is very difficult to meet this simple request in practice. Dewey emailed Toby Ord, an Australian philosopher who also works at the FHI, who replied that he didn’t know the answer, either, but if Dewey came to Oxford they could discuss it. He did, and he soon decided that he might be able to “make a difference” at the institute. So he quit Google.

Many of the FHI researchers seem motivated by a strong sense of moral purpose. Ord is also a founder of Giving What We Can, an organisation whose members pledge 10 per cent of their income to help tackle poverty. Ord gives more than this: anything he earns above £20,000, he donates to charity. Despite his modest salary, he plans to give away £1m over his lifetime.

Ord lives in Oxford with his wife, a medical doctor who has also signed up to the giving pledge, and baby daughter. “I’m living off the median income in the UK, so I can’t complain,” he told me, but they live frugally. The couple dine out no more than once a month, and he treats himself to one coffee a week. Ord sees a natural connection between this and his work at the FHI. “I was focusing on how to think carefully and apply academic scholarship to how I give in my life and helping others to give, too. So when it comes to existential risk I’m interested in the idea that another way of helping people is to figure out how to help future generations,” he said.

Ord is working on a report for the government’s chief scientific adviser on risk and emerging technology. Most researchers at the FHI and at the Centre for the Study of Existential Risk hope that such analysis will gradually be integrated into national policymaking. But, for now, both institutions are surviving on donations from individual philanthropists.

****

One of the most generous donors to scientists working in the discipline is Jaan Tallinn, the 42-year-old Estonian computer whizz and co-founder of Skype and Kazaa, a file-sharing program. He estimates that he has donated “a couple of million dollars” to five research groups in the US and three in the UK (the FHI, the CSER and 80,000 Hours, an organisation promoting effective philanthropy, which also has a close interest in existential risk). This year he has given away $800,000 (£480,000).

His involvement in the founding of the CSER came after a chance encounter in a taxi in Copenhagen with Huw Price, professor of philosophy at Cambridge. Tallinn told Price that he thought the chances of him dying in an artificial-intelligence-related disaster were higher than those of him dying of cancer or a heart attack. Tallinn has since said that he was feeling particularly pessimistic that day, but Price was nevertheless intrigued. The computer whizz reminded him of another professional pessimist: Martin Rees, the Astronomer Royal.

In 2003, Rees published Our Final Century, in which he outlined his view that mankind has only a 50/50 chance of surviving to 2100. (In the US the book was published as Our Final Hour – because, Rees likes to joke, “Americans like instant gratification”.) A TED talk that Rees gave on the same subject in July 2005 has been viewed almost 1.6 million times online. In it, he appears hunched over his lectern, but when he begins to speak, his fluency and energy are electrifying. “If you take 10,000 people at random, 9,999 have something in common: their business and interests lie on or near the earth’s surface. The odd one out is an astronomer and I am one of that strange breed,” he begins.

Studying the distant reaches of the universe has not only given Rees an appreciation of humanity’s precious, fleeting existence – if you imagine Planet Earth’s lifetime as a single year, the 21st century would be a quarter of a second, he says – but also allowed him an insight into the “extreme future”. In six billion years the sun will run out of fuel. “There’s an unthinking tendency to imagine that humans will be there, experiencing the sun’s demise, but any life and intelligence that exists then will be as different from us as we are from bacteria.”
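
Rees’s compression of Earth’s history survives a back-of-envelope check. Using round figures that are my assumptions rather than his (roughly 4.5 billion years behind us, plus the six billion he cites until the sun runs out of fuel), a century maps onto about a third of a second of that imagined year, the same order of magnitude as his quarter of a second.

```python
# Back-of-envelope check of Rees's image; the round numbers are assumptions.

SECONDS_PER_YEAR = 365.25 * 24 * 3600     # one calendar year, in seconds
earth_lifetime_years = 4.5e9 + 6.0e9      # ~4.5bn years so far plus ~6bn until the sun dies
century_fraction = 100 / earth_lifetime_years

# Compress the whole lifetime into a single year; how long does a century last?
print(century_fraction * SECONDS_PER_YEAR)  # roughly 0.3 seconds
```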

Even when you consider these vast timescales and events that have changed the earth dramatically, such as asteroid impacts and huge volcanic eruptions, something extraordinary has happened in recent decades. Never before have human beings been so able to alter our surroundings – through global warming or nuclear war – or to alter ourselves, as advances in biology and computer science open up possibilities of transforming the way we think and live. So it is understandable that Price immediately saw a connection with Tallinn’s interests.

Price invited him to Cambridge and took Tallinn on a tour of what he describes as the “two birthplaces of existential risk”. First they went for dinner at King’s College, where the pioneering computer scientist Alan Turing was a fellow from 1935 to 1945. Then they went for drinks at the Eagle pub, where in 1953 Francis Crick and James Watson announced that they had cracked the double-helix structure of DNA. When I came to the city to interview Price, he took me on a mini-existential risk tour to echo Tallinn’s, inviting me for a pint of DNA ale one evening with a few CSER researchers. For the second time in several weeks I found myself drinking with people I sensed would rather be in the library.

Tallinn’s first trip to Cambridge was successful. With Price and Rees, he co-founded the CSER, providing the seed funding. The centre already has several high-profile advisers, including the physicist Stephen Hawking, Elon Musk (the multibillionaire behind PayPal and SpaceX) and the ethicist Peter Singer. They are hoping to find funding for a couple of postdoctoral positions in the next year or so. “I see it as part of our role to act as a kind of virus, spreading interest and concern about this issue [existential risk] into other academic disciplines,” Price explained. He hopes that eventually institutions studying potentially dangerous fields will develop a risk-awareness culture. Tallinn is trying to change mindsets in other ways, too. He says he sometimes invests in tech companies as an excuse to “hang around in the kitchen, just so I get a feel of what they are doing and can try and influence the culture”.

I met Price and Tallinn for a coffee in the senior common room at Trinity College. At one point a man ran up to us to slap Tallinn on the back and say: “Hey, Jaan. Remember me? Remember our crazy Caribbean days?”

Tallinn looked confused, and the man seemed to sway from side to side, as if dancing with an imaginary hula girl. It turned out they had met at a conference in 2013 and the dancer was a renowned mathematician (I will spare him any blushes). Price said that when Tallinn first arrived at Cambridge he was a minor celebrity; several dons approached him to thank him for making it easier to speak to their children and grandchildren overseas.

Most of the large donors funding existential risk research work in finance or technology. Neither the CSER nor the FHI publishes details of individual donors, but the Machine Intelligence Research Institute (Miri) in Berkeley does. The three biggest donors to Miri are Peter Thiel of PayPal ($1.38m); Jed McCaleb, founder of Mt Gox, once the main exchange platform for bitcoins ($631,137); and Tallinn. Tech billionaires, like bankers, are more likely to have money to spare – but are they also more acutely aware of the dangers emerging in their industry? Tallinn speaks of Silicon Valley’s “culture of heroism”. “The traditional way of having a big impact in the world is taking something that the public thinks is big and trying to back it, like space travel or eradicating diseases,” he said. “Because I don’t have nearly enough resources for backing something like that, I’ve taken an area that’s massively underappreciated and not that well understood by the public.”

I wondered, when I spoke to Price and Tallinn, how big a difference they believed their work can make. It would be naive to imagine that one could ever convince scientists to stop working in a specific field – whether artificial intelligence or the manipulation of viruses – simply because it is dangerous. The best you could hope for would be a greater awareness of the risks posed by new technologies, and improved safety measures. You might want to control access to technology (just as we try to limit access to the enriched uranium needed to make nuclear bombs), but couldn’t this turn science into an increasingly elite occupation? Besides, it is hard to control access to technology for ever, and we know that in the modern, interconnected world small hacks can have catastrophic effects. So how great an impact can a few dozen passionate researchers make, spread across a handful of organisations?

“I’m not supremely confident it’s going to make a big difference, but I’m very confident it will make a small difference,” Price said. “And, given that we’re dealing with huge potential costs, I think it’s worth making a small difference because it’s like putting on a seat belt: it’s worth making a small effort because there’s a lot at stake.”
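
The arithmetic behind the seat-belt analogy is plain expected value: a small change in probability multiplied by enormous stakes is still a large number. The figures below are purely hypothetical, chosen by me to show the shape of the calculation rather than anyone’s actual estimate.

```python
# Illustrative expected-value arithmetic; both inputs are hypothetical.

people_at_stake = 7e9   # roughly everyone alive today, ignoring future generations
risk_reduction = 1e-6   # suppose the work shaves one-in-a-million off the risk

print(people_at_stake * risk_reduction)  # 7000.0 lives saved in expectation
```

Include future generations in the count and the product grows by orders of magnitude.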

Tallinn was more upbeat. “There’s a saying in the community: ‘Shut up and multiply’ – just do the calculations,” he said. “Sometimes I joke when there’s particularly good news in this ecosystem, like when I’ve had a good phone call with someone, that ‘OK, that’s another billion saved’.

“Being born into a moment when the fate of the universe is at stake is a lot of fun.” 

Sophie McBain is an assistant editor of the New Statesman 
