Long reads
21 June 2023

Is AI a danger to humanity or our salvation?

The founders of AI are divided over what’s next.

By Harry Lambert

On 31 March, Geoffrey Hinton – a 75-year-old British-Canadian scientist who won the 2018 Turing Award, the “Nobel of computing”, for his foundational work in AI – resigned from Google, where he had guided AI research for a decade. One recent evening he returned to King’s College, Cambridge, where he had been an undergraduate in the 1960s, to address an expectant audience. Hinton had come to Cambridge to deliver a message: the threat of a super-powerful AI was real and immediate. “I decided to shout fire,” he told a packed lecture hall. “I don’t know what to do about it or which way to run.” It may be, he added, that humans are “a passing stage in the evolution of intelligence”.

Hinton was expressing a familiar fear. “It seems probable that once the machine thinking method had started,” Alan Turing predicted in 1951, looking ahead to a world of intelligent machines, “it would not take long to outstrip our feeble powers… At some stage therefore we should have to expect the machines to take control.”

Turing’s fear was now also Hinton’s.

For a long time, Hinton explained, he had believed that machine intelligence could not surpass biological intelligence. The scientific paper that made modern AI possible – “Learning representations by back-propagating errors”, co-written by Hinton and published in Nature – is 37 years old. The mathematical process behind it, which Hinton described as “taking a gradient in a computer and following it”, could not surpass that of the human brain, which has benefited from millions of years of evolution. “A few months ago, I suddenly changed my mind,” he said at the Cambridge event. “I don’t think there is anything special about people, other than to other people.”

Hinton speaks and moves with the energy of a younger man, despite having a slipped disc in his back which has prevented him from sitting down for much of the past 18 years. “I still can’t take this seriously, emotionally,” he concluded that night at his talk, reflecting on the stark implications of his fervent warnings.

The existential peril of AI is becoming a story of our times. In June 2022 Blake Lemoine, a Google software engineer, went public with his fear that LaMDA, a large language model (LLM) that Google had developed, was sentient, that it had feelings, and wanted to be respected. Google fired him a month later. In May 2023, a colonel in the US Air Force revealed a simulation in which an AI-enabled drone, tasked with attacking surface-to-air missiles but acting under the authority of a human operator, attacked the operator instead. (The US Air Force later denied the story.)

Rishi Sunak has positioned himself at the centre of the global response to these issues. On 24 May, at 10 Downing Street, he met Demis Hassabis, the British CEO of Google DeepMind, and Sam Altman, the CEO of OpenAI, the company behind ChatGPT, which is backed by Microsoft. The government has since committed £100m to a new AI safety taskforce.

ChatGPT, an LLM launched in November 2022, has shocked and delighted internet users with its capacity to cite information, generate code and respond to questions “in ways that are very human-like”, as Michael Luck, director of the King’s Institute for Artificial Intelligence in London, put it when we spoke. It acquired 100 million users within two months of its release, an internet record. (Google has since released a rival model called Bard.)

There is a mystery to these AI models. They have improved far more rapidly than experts expected. “The engineering has moved way ahead of the science,” Luck said. “We built the systems and they’re doing wonderful things, but we don’t fully understand how they’re doing it.”

Roger Grosse, an AI researcher and colleague of Hinton’s at the University of Toronto (where Hinton has taught since 1987), told me that for most of his 17-year career “things seemed to be moving very slowly. It hardly even made sense to talk about human-level AI”. But the recent rate of progress has surged. “We are past the point where the risks can be dismissed out of hand.” Those who do so “need to make more precise arguments for why our systems are safe”, he said.

What exactly are the risks of AI? For Luck, “existential risk is not the most pressing question”. “There are many more immediate concerns around bias and ethics,” he thinks. Existing AIs, trained on data sets full of unexamined biases, are already shaping major corporate decisions, from screening job applications to approving loans.

Other experts echo Hinton’s disquiet. One of them is Yoshua Bengio, with whom Geoffrey Hinton shared the 2018 Turing Award. On 31 May, Bengio told the BBC that he feels “lost” over his life’s work. “It is challenging, emotionally speaking,” he said, as researchers are increasingly unsure how to “stop these systems or to prevent damage”.

In March, Bengio, a professor at the University of Montreal, signed a letter calling for a six-month halt to development of AIs that are “more powerful than GPT-4”, the language model that powers ChatGPT Plus, the premium version of OpenAI’s chatbot. Among the letter’s thousands of co-signatories was Elon Musk, the owner of Twitter, who warned in 2014 that the risk of something “seriously dangerous” happening in AI “is in the five-year time-frame. Ten years at most.”

But one major researcher thinks these fears are entirely unfounded: Yann LeCun, the third 2018 Turing Award winner alongside Hinton and Bengio, and Meta’s chief AI scientist since 2013. LeCun and Hinton have been friends for decades. Hinton taught LeCun. Now they are at odds. Hinton’s predictions about the impact of AI on jobs have been “totally wrong” in the past, LeCun tweeted on 28 May, having dismissed Hinton’s fear of “an AI takeover” as “very, very low-probability and preventable rather easily” on 2 April. “AI will save the world, not destroy it,” he wrote on 6 June, criticising the “destructive moral panic” being fostered by some, invoking Hinton in all but name.

In the days after his talk at Cambridge, I met Hinton at his home near Hampstead, north London. “Emotionally, I don’t believe what I’m saying [about AI],” he told me. “I just believe it intellectually.” What is stopping him from believing it? “Well, it’s too awful.” What is? “Them taking over. It’s not going to be pretty. Did you ever see a big change of power like that, that was pretty?” But couldn’t we turn off a machine that threatened us? “No, you can’t. You will be completely dependent on it by then.”

He continued: “This is a realm where we have no experience and no understanding of the possibilities, but they’re going to be very good at manipulation. And if they’ve got any sense, they won’t let us know that they’re much smarter than us. I’m not confident about any of these conclusions. But I’m confident that there’s not nothing to worry about.”

I suggested to Hinton that he was imputing to AI models motives we do not know they have. He anthropomorphised large language models throughout our conversation. He agreed that the motives of existing AIs are unknown, in so far as they have motives at all, but he thinks they will look to acquire more control so as to achieve the goals we set them. This is an open-ended risk. Hinton is trusting his intuition, having learned to do so over the decades in which his ideas were ignored. His rise to pre-eminence as the first “godfather of AI” is recent. “My intuition is: we’re toast. This is the actual end of history.”

Hinton has been on a lifelong mission to understand the brain. He arrived in Cambridge in 1967 as the great-great-grandson of George Boole, whose algebraic logic had laid the foundations for computing, and a cousin of Joan Hinton, one of the few female physicists to work on the Manhattan Project, the American-led mission to create the atom bomb.

At Cambridge Hinton started and abandoned a series of degrees, in natural sciences, architecture, physics and physiology, and philosophy, before finally graduating in experimental psychology, having briefly dropped out to become a carpenter. He was unimpressed by what he had been taught at Cambridge. The only way to understand the brain, he decided, was to build one himself.

In 1973, shortly after Hinton graduated, the British government released a report into artificial intelligence by James Lighthill, the Lucasian professor of mathematics at Cambridge. Research into AI, Lighthill wrote, had been defined by a “pronounced feeling of disappointment” since the work of Turing. “In no part of the field have the discoveries made so far produced the major impact that was then promised.”

A general artificial intelligence, Lighthill predicted, would not be discovered in the 20th century. Progress on speech and image recognition – two early hopes among researchers – would be slow. “It is almost impossible to get an adequate training in the subject in Britain,” another academic, Stuart Sutherland, wrote in a commentary published with the report. Only one postgraduate course existed, at the University of Edinburgh, which Hinton joined in 1972. His mission to build a brain began in an era that is now described as an “AI winter” in Britain, with funding cut in the wake of Lighthill’s findings. But Hinton faced a bigger problem than a lack of resources: he disagreed with the intellectual foundations of the field.

Back then, AI researchers were convinced that the essence of intelligence was reasoning. Their aim was to craft a set of symbolic expressions that represented the reasoning process, creating “a sort of cleaned up logic” of the mind, as Hinton would later describe it. This belief, or paradigm, was known as “symbolic AI”.

The symbolic AI paradigm held that thought could be rewritten symbolically in strings of words. But for Hinton words were only a way of representing what we are thinking. Thought itself was not linguistic. This was simply “a huge mistake”, Hinton said in 2017, after he had fought and won a decades-long battle against this once-orthodox view.

“The idea that thoughts must be in some kind of language is as silly as the idea that understanding the layout of a spatial scene must be in pixels: pixels come in, and if we had a dot-matrix printer attached to us, then pixels would come out,” he said then. “But what’s in-between isn’t pixels.”

Hinton believed that a thought, instead, is “just a great big vector of neural activity”, and that humans learn by neurons in the brain wiring themselves to one another at varying strengths. You could, therefore, recreate intelligence by building an artificial “neural net” which mirrored that process. You would simply need to write, or discover, an algorithm that could learn by adjusting the connection strengths between the artificial neurons in response to new data.

At Edinburgh Hinton’s ideas were merely tolerated. Christopher Longuet-Higgins, head of the department, believed in symbolic AI along with everyone else. But Hinton was emboldened by the great mathematicians of his childhood, Turing and John von Neumann (the pioneer of game theory who also worked on the Manhattan Project). Both men had also been inspired by the biology of the brain as a means of building an AI. But they died too young – Turing in 1954 aged 41, and von Neumann in 1957 at 53 – to influence the nascent field.

In 1986, Hinton’s long apprenticeship in academic insignificance ended when Nature published the paper on “back-propagation” which he had co-written. He had helped discover the type of algorithm (a set of mathematical instructions) that could reweight the connections in a neural net. With that advance, a machine could in theory begin to learn on its own. Hinton and his co-authors showed that their neural net could take a partial family tree and describe various unstated relationships within it. Hinton’s work, Michael Luck told me, was “fundamental” to the development of the AIs we use today.
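
The mechanics can be sketched briefly. Below is a minimal illustration of the idea rather than the 1986 paper’s actual family-tree network: a tiny one-hidden-layer net learns XOR, with the error at the output propagated backwards to tell every connection which way, and by how much, to shift.

```python
# Minimal back-propagation sketch (illustrative only; not the network from the
# 1986 Nature paper). A small net with one hidden layer learns XOR by repeatedly
# nudging its connection strengths in the direction that reduces its error.
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)  # inputs
y = np.array([[0], [1], [1], [0]], dtype=float)              # targets (XOR)

W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)   # input -> hidden connections
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)   # hidden -> output connections
sigmoid = lambda z: 1 / (1 + np.exp(-z))
lr = 1.0                                        # learning rate

for step in range(10000):
    # Forward pass: run the inputs through the network.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # Backward pass: propagate the output error back through the layers.
    d_out = (out - y) * out * (1 - out)
    d_hid = (d_out @ W2.T) * h * (1 - h)

    # Reweight every connection by a small step along its gradient.
    W2 -= lr * h.T @ d_out;  b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_hid;  b1 -= lr * d_hid.sum(axis=0)

print(np.round(out, 2))  # typically converges towards [0, 1, 1, 0]
```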

Hinton now had the algorithm to run an AI model, or the basis of one, but the computers of the 1980s were slow and the data sets on which neural nets could be trained were small. You need all three – prodigious stores of data, sufficient computing power and a set of algorithms – to give life to a neural net, just as you need fuel, oxygen and heat to start a fire.

But claiming that he did not have enough data or fast enough computers, Hinton said in Toronto on 1 March this year, “was seen as a lame excuse”. Most researchers believed his theory was wrong. He remained stuck on the fringes of AI; neural nets were not, in fact, even thought of as AI.

“Neural nets were almost abandoned by the scientific community,” Yoshua Bengio said in 2017, reflecting in disbelief. In 2010, when Hassabis set up DeepMind, the climate for AI was cold. “Nobody was talking about AI,” he said last year.

When Hassabis studied at Massachusetts Institute of Technology (MIT) in the late 2000s, he had been told that “AI doesn’t work”. Symbolic AI had been attempted in the 1990s, and academics he met thought him “mad for thinking that some new advance could be done with [deep] learning systems”. In 1986, Hinton had presented one of those MIT professors, Marvin Minsky, with his breakthrough paper on back-propagation. Minsky left the room without reading it.

In 2012, however, Hinton’s 40-year faith in his idea was rewarded. A network that he built with two of his students, which they called “AlexNet”, vastly outperformed competitors in the ImageNet Challenge, a major annual image-recognition contest. LeCun, the fellow AI pioneer who now rejects Hinton’s warnings, captured the magnitude of Hinton’s feat at an industry gathering later that year. “This is proof,” he told the audience, that machines could see.

The AlexNet paper written by Hinton and his students established neural nets, which Hinton had rechristened “deep learning” five years earlier, as the core mechanism of AI. It has since become one of the most cited papers in computer science. DNNresearch, the company hastily created by Hinton and his co-authors (one of whom is now chief scientist at OpenAI), was bought by Google for $44m. Hinton became “the man Google hired to make AI a reality”, as Wired magazine put it in 2014. Having spent much of his academic life being respectfully scorned, Hinton was now the pioneering researcher in his field.

When I met Hinton at his London house on 30 May, he was alone. He is twice widowed, having lost both of his wives to cancer, in 1994 and 2018. “Early and accurate diagnosis” of illness, he said in 2017, “is not a trivial problem. We can do better. Why not let machines help us?”

He bought the house in Hampstead for when he and his siblings, who also live abroad, are in England. He grew up in Bristol, the third child of a difficult father, a fellow scientist who competed with his son. (“In another few years I’ll be over him,” Hinton would say as I left; his parents, he added, read the New Statesman.) I found him in much the same spirit I had observed at Cambridge: more lively, relaxed, and amused than his apocalyptic words warranted.

“We’re dealing with an event that’s never occurred before in human history, of something more intelligent than us,” he said, after offering me a tea and sitting on a stack of foam pads to ease his back (he soon stood up).

It was a compelling remark, but Hinton had used it before, in 2015. “There is not a good track record of less intelligent things controlling things of greater intelligence,” he told the New Yorker in a brief exchange. If that threat is not new to Hinton, what is? The speed at which a fearful future is accelerating into the present. “It shouldn’t have been so much of a surprise. I always thought AI would catch up with the brain. I didn’t think it would overtake it that quickly. That’s what’s really changed. It’s become urgent.”

In 2015, he had suggested that AI wouldn’t match the brain until at least 2070. Hadn’t this, I asked, been cause enough for alarm at the time? “Things that are going to happen in my children’s lifetime matter much more than things that aren’t. That’s just how people are,” he said.

ChatGPT, Hinton notes, has fewer neural connections than we do: in the order of one trillion, rather than the 100 trillion in the human brain (of which Hinton suggests at least ten trillion are used for storing knowledge). And yet existing AIs are already more competent than humans in certain tasks: speech and image recognition, coding, forms of gaming. In what realms will humans remain predominant once these models have as many connections as we do? Very few, thinks Hinton.

An AI can learn from many separate interactions with the world at once, unlike humans, whose intelligence is individual and particular. Whenever one copy of an AI network absorbs new information, every other copy can be updated with it. Humans, by contrast, pass on what they learn through the painfully slow process of distillation.
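
A hedged sketch of the point (the setup and names are invented here, not Hinton’s code): several identical copies of a model each learn from different experiences, then pool their weight updates, so every copy instantly benefits from what any copy saw.

```python
# Illustrative only: replicas of one model train on different data, then share
# what they learned by averaging their gradients and applying one common update.
import numpy as np

def gradient_of(weights, batch):
    """Stand-in for back-propagation on one replica's private batch."""
    X, y = batch
    return 2 * X.T @ (X @ weights - y) / len(y)  # gradient of a squared error

rng = np.random.default_rng(1)
shared_weights = rng.normal(size=(3, 1))                      # identical in every replica
replica_batches = [(rng.normal(size=(8, 3)), rng.normal(size=(8, 1)))
                   for _ in range(4)]                          # four different "experiences"

for step in range(200):
    grads = [gradient_of(shared_weights, batch) for batch in replica_batches]
    shared_weights -= 0.05 * np.mean(grads, axis=0)  # one averaged step, applied everywhere
```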

Yet ChatGPT errs, all the time. It cannot be relied upon. In so far as people trust ChatGPT, it is dangerous not for what it knows, but for what it appears to know but does not. It is, Hinton suggests, “behaving like people”. When it errs, it is not lying or hallucinating but confabulating.

“We do it all the time. If I asked you to describe a memory from a long time ago, you will come up with a plausible story that fits all the little traces of things you have. You will come up with something that isn’t quite right, but it will convince you.” It fits enough of the things you know to be true.

ChatGPT works in much the same way. It does not retrieve information as a laptop does when you search it. Instead it runs your question through its network of connections, producing a response that best fits the data on which it has been trained. Ridding it completely of error will, Hinton said, be “more or less impossible”, but as AIs become self-aware – which he expects – they will correct themselves.
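
In caricature, the mechanism looks something like the sketch below (the probabilities are invented; a real model computes them with billions of learned connections, and retrieves nothing): the model repeatedly asks which word best fits everything so far and samples from that distribution, which is also why a plausible but wrong continuation can slip out as a confabulation.

```python
# Toy illustration of generation by best-fit continuation, not retrieval.
# The probability tables are made up; a trained network would compute them.
import random

def next_token_distribution(context):
    """Stand-in for the network's forward pass: how plausible is each next word?"""
    if context.endswith("The capital of France is"):
        return {"Paris": 0.92, "Lyon": 0.05, "beautiful": 0.03}
    return {"the": 0.4, "a": 0.3, "Paris": 0.3}

def generate(prompt, n_tokens=1):
    text = prompt
    for _ in range(n_tokens):
        words, weights = zip(*next_token_distribution(text).items())
        text += " " + random.choices(words, weights=weights)[0]  # sample, don't look up
    return text

print(generate("The capital of France is"))  # usually "Paris", occasionally not
```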

The AI pioneer Geoffrey Hinton, whose work is inspired by the biology of the brain. Photo by Chloe Ellingson / New York Times / Redux / Eyevine

In February, Hinton contacted Grosse, his University of Toronto colleague. He credits their conversation with encouraging him to quit Google and speak out. “Geoff emailed me,” Grosse said when we spoke, “to say he was concerned about the rate of AI progress, and that he thought that we had a good chance of ending humanity in the next 20 years.”

Grosse, who delivers sentences with the caution of a careful scientist, believes that existing deficiencies in AI models are a matter of degree, not of kind. AI systems are scaling in “extremely regular ways”: so far, each generation of leading LLMs, trained with more computing power and data, has become that much more capable. Current AIs are marked out by their perceptual understanding of words and images, but even their ability to reason – once the preserve of symbolic AI, and long considered the Achilles heel of deep learning programs – is now, Grosse says, approaching human level. Grosse doubts that there is anything unique about the brain that cannot be simulated on a silicon chip.
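
The regularity Grosse describes is usually expressed as an empirical scaling law. The form below is imported from the wider literature as an illustration, not something Grosse set out in our conversation; the constants and exponents are fitted from experiments and vary by model and dataset.

```latex
% Illustrative power-law form of neural scaling: test loss falls smoothly as
% parameter count N and dataset size D grow (N_c, D_c, \alpha_N, \alpha_D are
% empirically fitted constants, not figures from this article).
L(N) \approx \left(\frac{N_c}{N}\right)^{\alpha_N}, \qquad
L(D) \approx \left(\frac{D_c}{D}\right)^{\alpha_D}
```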

Hinton’s concerns are recent – too recent for the third godfather of AI, Yann LeCun, to be moved by them.

“Geoff only started thinking about these things a couple of months ago,” LeCun tweeted after Hinton spoke at Cambridge. “Before then, he didn’t think machines were anywhere close to reaching human-level AI.” LeCun sees no cause for fear. We are still “missing something big” required for machines to match biological intelligence, he wrote on 14 April. Babies and animals understand “how the world works”. AIs do not.

“We’re not close to super-intelligence,” Mark Zuckerberg, the CEO of Meta and LeCun’s employer, said this month. That, he suggested, is “pretty clear to anyone working on these projects”. Hassabis of DeepMind agrees. “None of the systems we have today even have one iota of consciousness or sentience,” he said last July (prior to ChatGPT’s release). “That’s my personal feeling interacting with them every day.” Our brains, he suggested, are trained to interpret agency in even inanimate objects. We falsely associate language with agency.

David Barber, the director of the UCL Centre for Artificial Intelligence, shares their scepticism. “I’ve heard the arguments” about runaway intelligence, he told me, “but I don’t get a visceral sense of them. We don’t give these systems the agency they would need to carry out those threats.”

LeCun has said that he has “no doubt that superhuman AI systems will eventually exist”, while Hassabis made the sale of DeepMind to Google (in 2014 for £400m) conditional on the creation of an ethics board that would oversee any greater-than-human AI the company creates. But both are convinced that the models they are building today pose no threat.

I asked Hinton for the strongest argument against his own position. “Yann thinks it’s rubbish,” he replied. “It’s all a question of whether you think that when ChatGPT says something, it understands what it’s saying. I do.”

There are, he conceded, aspects of the world ChatGPT is describing that it does not understand. But he rejected LeCun’s belief that you have to “act on” the world physically in order to understand it, which current AI models cannot do. (“That’s awfully tough on astrophysicists. They can’t act on black holes.”) Hinton thinks such reasoning quickly leads you towards what he has described as a “pre-scientific concept”: consciousness, an idea he can do without. “Understanding isn’t some kind of magic internal essence. It’s an updating of what it knows.”

In that sense, he thinks ChatGPT understands just as humans do. It absorbs data and adjusts its impression of the world. But there is nothing else going on, in man or machine.

“I believe in [the philosopher Ludwig] Wittgenstein’s position, which is that there is no ‘inner theatre’.” If you are asked to imagine a picnic on a sunny day, Hinton suggested, you do not see the picnic in an inner theatre inside your head; it is something conjured by your perceptual system in response to a demand for data. “There is no mental stuff as opposed to physical stuff.” There are “only nerve fibres coming in”. All we do is react to sensory input.

The difference between us and current AIs, Hinton thinks, is the range of input.

ChatGPT is mono-modal. It has been trained on text alone, while other AIs can perceive and create images. But that will change. Future AIs will be multi-modal. They could act on the world by being attached to a robot. “Your understanding of objects changes a lot when you have to manipulate them,” Hinton said. (LeCun has described this as being “easier said than done”.)

The issue for Hinton is one of capability, not sentience, which he thinks is poorly defined. Marc Andreessen, a major Silicon Valley tech investor, is bewildered by Hinton’s view. “AI doesn’t want, it doesn’t have goals, it doesn’t want to kill you, because it’s not alive,” he said this month. “It is not going to come alive any more than your toaster will.”

There is a third man, or godfather, in the conflict between Hinton and LeCun: Yoshua Bengio, the other 2018 Turing Award winner. It is Bengio whose perspective captures the sudden shift in artificial intelligence. In 2017, he gave a TED Talk in which he implored his audience to “set aside your fears” about AI. “Don’t be fooled,” he intoned – “we are still very, very far from a machine” that can “master many aspects of our world” as even a two-year-old does.

That was then. I spoke to Bengio on 12 June. His arguments in 2017, I said, sounded like LeCun’s today. “I used to say the same thing,” he conceded. “But then came ChatGPT. Without significant progress on the scientific side,” he said, “you have this incredible jump in intelligence. I did not expect us to pass the Turing Test for decades.”

That test – can you tell if you are talking to a man or a machine? – has long been a benchmark for progress in AI research.

What has stunned Bengio is the leap made by ChatGPT in the absence of major algorithmic innovations. GPT-4’s leap in capability is purely a product of scaling: of the model being trained with more data on more powerful computers (of “putting more juice into the machine”, as Bengio put it). Much greater leaps may take place as researchers refine and reinvent the algorithms guiding these models. Bengio thinks he has a “pretty good idea of what’s missing” in existing AIs, and “it could be just a few years” until they are improved dramatically. “It could jump to a new plateau any time,” he told me.

Bengio is in touch with Hinton. Like him, Bengio thinks that human consciousness “may not be so magical”, and that a machine “might achieve the same thing in simpler ways”. Neither man sees a fundamental difference between the machine future and human present.

I put Andreessen’s quote to Bengio: AIs are no more alive than your toaster. “I wish,” he replied. Dismissing AI because it is not biologically alive is inadequate. The issue is whether AIs seek self-preservation, he said – and AI safety research has shown that even “if you don’t give it as an explicit goal, it becomes a goal”.

LeCun is relaxed about the super-intelligent future. He thinks it will be possible to restrain AIs by writing an “objective function” that deters them from acting malignly. Bengio is unconvinced. “The whole experience in deep learning,” he told me, “is that it’s very difficult to write a reward function that does exactly what you want. In general the agent [the AI] finds loopholes.”
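
Bengio’s point about loopholes can be made concrete with a toy example (entirely invented, not one of his): an agent rewarded for a proxy of what we want tends to maximise the proxy rather than the intent.

```python
# Invented example of a misspecified reward. We want a clean room, so we pay the
# agent per "mess removed". The loophole: an agent that can also create mess
# earns more by manufacturing messes and then cleaning them up.
def reward(actions):
    return sum(1 for action in actions if action == "remove_mess")

honest_plan = ["remove_mess"] * 3                      # clean the three messes that exist
loophole_plan = ["make_mess", "remove_mess"] * 10      # manufacture mess, then clean it

print(reward(honest_plan), reward(loophole_plan))      # 3 vs 10: the loophole pays better
```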

Bengio is concerned that autonomy is “just a layer” that could be added to a super-intelligent AI in the future, whether by design or error. AI research is lightly regulated. There is nothing in place, he fears, to prevent researchers from coding a form of agency into AIs more intelligent than they are – and creating something far more powerful than they expect.

What does Geoffrey Hinton know? Perhaps more than anyone, he helped build the AI systems that he now believes may pose a grave threat to us all. He has travelled a long way to his present, and some might say alarmist, position. He spent most of his academic life on the outside, yet his warnings now sound around the world.

Elon Musk has spoken to Hinton in recent weeks. He too believes AIs will soon outsmart humans. “He called humans the boot-loader for digital intelligence,” Hinton says. “He thinks they’ll keep us around out of curiosity. The universe is more interesting with people in it.”

Hinton had a recent meeting, at 10 Downing Street, with Eleanor Shawcross, head of Sunak’s policy unit. He warned her of job losses ahead as a consequence of AI and advised that the government should tax the corporate gains in productivity. “I suggested she tell Sunak but maybe not mention it’s called socialism.”

Michael Luck wants to move the debate away from catastrophe. “It is really difficult to define human-level AI,” he told me. “There are lots of different elements to it. We do really complex reasoning, we generate text like ChatGPT, we can be creative, we respond to events. I don’t know to what extent human-level AI is imminent.”

Luck sees a distinction between the sentences we form, which have “substance and meaning” behind them, and those formed by ChatGPT, which are only “a manipulation of text”. Consciousness is not something he knows how to spot; referring to it, Luck thinks, only “complicates the discussion right now”.

For Roger Grosse, the question is whether AIs threaten humanity, and “it is irrelevant whether powerful AI systems are conscious or not”. They can be both clueless and highly capable. What matters is not consciousness, or even intelligence, but the ability of AIs to influence and control us. (In March, a Belgian man was discovered to have died by suicide after having had a series of exchanges over six weeks with Chai, an AI chatbot similar to ChatGPT, which encouraged his eco-anxiety.)

The question is: who will be dependent on whom? Geoffrey Hinton invoked the Neanderthals when we spoke. A better form of intelligence came along, “ever so slightly better”, and the Neanderthals were put on the path to extinction. He now believes that machine intelligence will be much greater than our own. The difference is that, for now, our AIs still need humans. They do not exist unless we operate them. The solution, Hinton said, may be to preserve that dependence indefinitely, to “keep it so that it can’t do without us”.

This article appears in the 21 Jun 2023 issue of the New Statesman, The AI wars