In his 1965 paper “Speculations Concerning the First Ultra-intelligent Machine”, the mathematical prodigy and Bletchley code-breaker IJ Good wrote:
“The survival of man depends on the early construction of an ultra-intelligent machine… Let an ultra-intelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever. Since the design of machines is one of these intellectual activities, an ultra-intelligent machine could design even better machines; there would then unquestionably be an ‘intelligence explosion’, and the intelligence of man would be left far behind. The first ultra-intelligent machine is the last invention that man need ever make.”
By 1998, Good had radically altered his stance. In an autobiographical fragment written in the third person, he revealed that he:
“… now suspects that ‘survival’ should be replaced by ‘extinction’. He thinks that because of international competition, we cannot prevent the machines from taking over. He thinks we are lemmings.”
Good was not the first to anticipate that machines might take over from their human inventors. The Victorian novelist and evolutionary theorist Samuel Butler (1835-1902) predicted that “the time will come when machines will hold the real supremacy over the world”. In his profound and pioneering study Darwin Among the Machines (1997), where he discusses Butler and Good, the historian of science George Dyson summarised the process through which humankind would be surpassed: “In the game of life there are three players at the table: human beings, nature and machines. I am firmly on the side of nature. But nature, I suspect, is on the side of the machines.”
Where Good was distinctive was in believing that super-intelligent machines could secure humanity’s survival. At Bletchley he had seen the machines that broke Enigma help save civilisation from Nazism. It was natural for him to believe that super-intelligent machines could solve the problems of society. “They would need to do so,” he wrote, “in order to compensate for the problems created by their own existence.” If they generated technological unemployment, for example, they would frame policies for mitigating it.
Good’s change of mind did not come from believing the machines would be hostile to us. Rather, they would no longer be human tools. As they learned to replicate and improve themselves, they would amend their programming. Protocols requiring them to be human-friendly would be weakened or outwitted, and soon the machines would be beyond human control. If they destroyed us inadvertently, they would not care.
Good’s second thoughts are at the heart of James Barrat’s Our Final Invention: Artificial Intelligence and the End of the Human Era, first published in 2013. It remains one of the most arresting, compelling and prescient investigations of the implications of artificial intelligence (AI). In the introduction to the new edition that appeared in July, he notes how quickly machine minds have evolved. ChatGPT exhibits:
“… unexpected emergent properties, not all of which have been discovered. In fact, many of GPT-3 and 4’s skills such as writing computer programs, translating languages they were not trained on, and creating poetry, were unplanned… Other models revealed unanticipated capabilities including harmful ones like lying to bypass Captcha tests [designed to differentiate machine from human intelligence], contributing to a suicide after ‘therapy’ sessions, and social engineering (emotional manipulation like professing love, claiming to have a soul, and asking for freedom).”
Some of AI’s risks are becoming clear. Autonomous battlefield robots, drones and computerised missiles take war to new levels of danger. Catastrophes that in the past were averted by human intervention – as in 1983, when the Soviet officer Stanislav Petrov stopped a nuclear war that would have been caused by a malfunctioning Soviet satellite warning system – will be harder to prevent. Data capture could decide elections in Britain and the US as early as next year. The Hollywood actors’ and writers’ strike is the first response to waves of job losses that could run into tens of millions worldwide over the coming decade.
Attitudes to AI are starkly polarised. For the Google scientist Ray Kurzweil, the rise of intelligent machines is the prelude to the “Singularity”, an abrupt expansion in knowledge which will enable humans – some of them, at any rate – to escape biological death. Barrat follows Good in fearing the opposite, human extinction. The echoes of religion are unmistakable, and it is tempting to read this as a clash between rival apocalyptic myths. But Barrat is rigorously empirical in explaining the reasons for his alarm. As he points out, the designers of ChatGPT do not begin to comprehend how its capabilities developed. What is evolving is not a human-like super-intelligence but a new type of mind. A creation of human reason, AI is beyond human understanding.
Some of those working in the industry share Barrat’s concern. In March of this year, Elon Musk, along with thousands of others, signed an open letter demanding a six-month pause in the training of the most powerful AI systems, since “non-human minds” “might eventually outnumber, outsmart, obsolete and replace us”. Yet there is no chance of any pause. Commercial competition and geopolitical rivalry ensure that Big Tech and military programmes will push on with the technology as fast as they can. In May, Geoffrey Hinton, a “godfather of AI”, resigned from Google in order to warn of the risks, observing: “I think it’s quite conceivable that humanity is just a passing phase in the evolution of intelligence.”
There is little doubt that AI poses a threat to humankind. Equally, there is no prospect of humankind removing the threat. AI can be beneficial in analysing large amounts of data and automating complex tasks. It is already making inroads in law and medicine, and creating artistic and literary works rivalling those of humans. It could have an important role in preventing future pandemics and adapting to climate change. But AI is not simply a tool, and there is no way it can be directed only to achieving goals “we” prescribe for it. There is no species-wide agent that could guide new technologies, only shifting multitudes of human beings, all of them with conflicting values and purposes.
Once AI has entered the world, its evolution has a momentum of its own. Humankind can no more master it than it can halt climate change – in its current phase a product of human action that has also acquired its own momentum. No global authority is on the horizon that could curb “gain of function” research enhancing the lethality of viruses of the kind that may have precipitated the Covid pandemic via a lab leak in Wuhan. Human beings must somehow learn to live in the endangered world they have unwittingly created.
AI is an existential challenge in part because it is a metaphysical shock. Particularly in the West, we are taught that consciousness is the archetypal human attribute, which makes our species uniquely valuable. But what if hyper-intelligent conscious machines arrive on the scene? Humans would be as obsolete and useless as sundials and quill pens.
One response is to deny that machines, however smart, can be conscious. But it is hard to see why not. If our minds evolved in the material world, matter can be self-aware. The emergence of mind in machines is no more mysterious than it is in cats, gorillas and humans.
Thinkers who look forward to superhuman or post-human species always imagine them as more knowing, not more playful or funnier. These imagined superhumans never possess what is arguably humankind’s only unique attribute – a sense of the absurd. The superior species envisioned by techno-futurists are inflated versions of themselves, showing off their cleverness in a never-ending TED talk – for some of us, a vision of hell.
Fortunately, there is no prospect of any such species coming into being. If AI is evolving in Darwinian fashion, chance will be a decisive factor. The geopolitical conflicts that prevent a pause in the development of AI systems may well blow them up. If superfast machines trigger a nuclear war, they will destroy much of their infrastructure and possibly themselves. Ultra-intelligent machines are as vulnerable to extinction as any other product of evolution.
There is no reason to expect the arrival of a global digital mind, as many seem to fear or hope. Like so much else in contemporary thought, the idea that evolution tends towards a single godlike intelligence is a relic of monotheism. The upshot could be more like Homer’s world of warring gods.
Conflicts between AI systems may wreck them, leaving human survivors subsisting in whatever remains. Some societies might opt out of the technology, while others might adjust to its risks and manage to use it mainly for purposes they judge beneficent. No one will be in charge, however. “Humanity” cannot take control of the evolution of AI, because humanity – understood as a collective agent – does not exist. Meanwhile, AI offers relief from the pains of being human.
Artificial intelligence may not lead to our extinction, but it poses a profound challenge to our humanness. The logic of AI is the progressive displacement of actual experience by mechanical simulacra. Instead of the daily encounters that enable communities to sustain a common life, random collections of solitary people are protected from each other and themselves by unblinking video surveillance. Rather than connecting in troublesome relationships, they are turning to cyber-companions for frictionless friendship and virtual sex. The contingencies of living in a material world are being swapped for an algorithmic dreamtime. The end-point is self-enclosure in the Matrix – a loss of the definitively human experience of living as a fleshly, mortal creature.
A depleted human species may linger on, but AI may still bring an end to the human era. If ever more people opt for a programmed existence in the techno-sphere, the human world will be emptied of meaning. What will be lost are the fugitive sensations of accidental lives – the defiant smile in the face of cruel absurdity, the glance that began a love that changed us forever, a tune it seemed would always be with us, tears in the rain.
John Gray’s “The New Leviathans: Thoughts After Liberalism” will be published in September by Allen Lane
Our Final Invention: Artificial Intelligence and the End of the Human Era
James Barrat
Quercus, 352pp, £10.99