6 January 2021 (updated 7 January 2021, 12:10pm)

The Road to Conscious Machines is an accessible and highly readable history of artificial intelligence

As the respected computer scientist Michael Wooldridge explains, AI is the story of an effort to impose the order of mathematics on to the messiness of the real world.  

By Will Dunn

In 2016, the animator Hayao Miyazaki – director of the films My Neighbour Totoro and Spirited Away – was invited to visit a video games company in Japan called Dwango. The technologists at the company wanted to build “a machine that can draw pictures like humans do”. Using a technique called “deep learning”, the team had created a 3D model of a human body that had taught itself to move around – albeit in an odd, inhuman manner. The technologists hoped the great director would be impressed by their creation – a character that could animate itself. Miyazaki watched the demonstration in silence, thought for a few moments, and then told them: “I am utterly disgusted. I would never wish to incorporate this technology into my work… I feel strongly that this is an insult to life itself.”

Why do we worry about artificial intelligence? Beyond fears that AI might make us redundant, brainwash us or blow us up – concerns that are not in themselves trivial – there is a more fundamental revulsion. The price of technological progress is what Marx called Entfremdung – our estrangement from other people, our work and our possessions. Humans have outsourced their morality to God, their creativity to their employers, their desires to Amazon and their friendships to social media. The last unalienated part of human life is thought itself, and the proposal that machines could do our thinking for us is a threat to that which is most essential to our humanity.


Fortunately, as the respected computer scientist Michael Wooldridge illustrates in this accessible and highly readable explanation of the field, most people who make large claims about AI either don’t understand it or are deliberately making things up. For more than 80 years, technologists have told investors and the public that thinking machines are about to change everything. These claims are always situated just far enough in the future that they can never be tested; all anyone can do is be impressed by the progress they supposedly represent. In 2017, for example, advisers told the UK government that AI would add £630bn to the UK economy by 2035, almost doubling the country’s economic growth. More than four decades earlier, in 1972, a report by James Lighthill, who preceded Stephen Hawking as the Lucasian professor of mathematics at Cambridge University, wondered at fellow “respected scientists” who claimed, with straight faces, that “all-purpose intelligence on a human scale” would be possible by the early 1980s.

The beginning of the AI project was itself an investigation into the impossible. In 1935, Alan Turing set out to solve the Entscheidungsproblem (“decision problem”) posed by the German mathematician David Hilbert at a conference in 1928, which asked whether there is a procedure that can “decide” any mathematical question – settle it with a definitive “yes” or “no”. Turing used theoretical computers, called Turing machines, to prove there were indeed problems for which calculation alone could not provide a solution. The race began to compute the incomputable.
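
The modern textbook version of Turing’s argument, usually told as the “halting problem”, can be sketched in a few lines of code. The snippet below is purely illustrative: the function halts is a hypothetical oracle that, as the argument shows, can never actually be written.

```python
# A minimal sketch of Turing's diagonal argument. `halts` is a
# hypothetical oracle; the point is that no real implementation
# of it can exist.

def halts(program, data) -> bool:
    """Pretend oracle: True iff program(data) eventually stops."""
    raise NotImplementedError("Turing proved no such test can exist")

def troublemaker(program):
    # Do the opposite of whatever the oracle predicts about
    # running `program` on its own source code.
    if halts(program, program):
        while True:      # oracle said "halts", so loop forever
            pass
    return               # oracle said "loops", so stop at once

# Ask: does troublemaker(troublemaker) halt? Whatever the oracle
# answers, troublemaker does the opposite, so the oracle must be
# wrong somewhere. Calculation alone cannot decide every question.
```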

The story of AI is the story of the attempt to impose the order of mathematics on to the messy business of the real world, where there is a great deal that is incomputable. Reality is given to “combinatorial explosion”, in which effects pile upon effects, creating dizzying complexity. This can be seen on a chess board. White has 20 possible opening moves and Black has 20 possible replies, giving 400 possible positions after one move each – and every one of those 400 game-states is then multiplied by every possible response. Thus the number of possible games proliferates exponentially with every move; by the time each player has moved five times, there are more than 69 trillion possible games. In 1950, the American mathematician Claude Shannon estimated the total number of possible games of chess at around ten to the power of 120, a figure higher than the number of atoms in the known universe. If this is the potential contained in an orderly eight-by-eight board, with six types of piece and a few simple, unbreakable rules, think of the complexities that would emerge in trying to compute the possibilities of a conversation, or the behaviour of cars on a motorway.
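
The arithmetic is easy to verify. Below is a back-of-envelope sketch in Python; the average branching factor of roughly 24 legal moves per turn is an assumed round number, not an exact figure.

```python
# Rough sketch of the combinatorial explosion described above.

first_exchange = 20 * 20     # 20 White openings x 20 Black replies
print(first_exchange)        # 400 positions after one move each

branching = 24               # assumed average legal moves per turn
games = branching ** 10      # five moves each = 10 half-moves
print(f"{games:.2e}")        # ~6.3e13: the same ballpark as the
                             # 69 trillion exact count cited above
```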

This didn’t stop some computer scientists from attempting to build consciousness by brute force. In 1984, a gifted engineer called Douglas Lenat began trying to teach his system, Cyc, all the facts that humans know about the world and all the connections between them, in the hope that a system with enough knowledge would begin to self-educate and to join us in, as he put it, “consensus reality”. Lenat and his team have been busy at this task for 36 years.
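
The idea is easier to grasp in miniature. The sketch below invents a toy knowledge base of the kind Cyc scales up by many orders of magnitude; the facts and the single inference rule are made up for illustration, and Cyc’s real representation language, CycL, is far richer.

```python
# A drastically simplified sketch of a Cyc-style knowledge base:
# facts stored as (subject, relation, object) triples, plus one
# toy inference rule. Everything here is invented for illustration.

facts = {
    ("Totoro", "is_a", "forest_spirit"),
    ("forest_spirit", "is_a", "living_thing"),
}

def infer(known):
    # Toy rule: "is_a" is transitive. Real common-sense reasoning
    # needs millions of facts and far subtler rules than this.
    derived = {(a, "is_a", c)
               for (a, r1, b) in known if r1 == "is_a"
               for (b2, r2, c) in known if r2 == "is_a" and b2 == b}
    return known | derived

print(("Totoro", "is_a", "living_thing") in infer(facts))  # True
```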



The more promising experiments in AI have been those which take their cue from evolution – those which, faced with ungraspable complexity, say: don’t bother. Human brains have developed heuristics – mental shortcuts, rules of thumb – to arrive at conclusions quickly, without showing their working. Successful computer science has followed, abandoning perfection in favour of results. It was this approach that allowed an IBM supercomputer to beat world chess champion Garry Kasparov in 1997: IBM’s system did not try to “solve” chess by computing every possible game – it’s debatable whether technology will ever achieve such a feat – but looked, as a human player would, for the most promising strategy.
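
The textbook core of that approach is “minimax” search that stops at a fixed depth and falls back on a heuristic guess. The sketch below is a toy version over an abstract game tree; Deep Blue’s real search and evaluation were vastly more elaborate.

```python
# Depth-limited minimax: explore a few moves ahead, then let a
# heuristic estimate the rest instead of solving the whole game.
# The "game" here is an abstract tree whose leaves carry scores.

def minimax(node, depth, maximising):
    if isinstance(node, (int, float)):
        return node                     # leaf: exact outcome known
    if depth == 0:
        return heuristic(node)          # cut off: guess, don't solve
    scores = [minimax(child, depth - 1, not maximising)
              for child in node]
    return max(scores) if maximising else min(scores)

def heuristic(node):
    # Stand-in rule of thumb. In chess this would weigh material,
    # mobility, king safety; here, the average of visible leaves.
    leaves = [x for x in node if isinstance(x, (int, float))]
    return sum(leaves) / len(leaves) if leaves else 0

tree = [[3, 5, [2, 9]], [0, [7, 4], 1]]         # toy game tree
print(minimax(tree, depth=2, maximising=True))  # 3
```

IBM’s system added many refinements, including alpha-beta pruning and custom hardware, but the shape of the idea is the same: guess well, quickly, rather than calculate perfectly.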

In 2014, there was a new surge of AI enthusiasm after Google bought a small London-based technology company called DeepMind for £400m. The systems DeepMind builds are also based on emulating nature: they copy the way neurons communicate in the brain in virtual structures called “neural nets”, which are not given rules through programming but acquire them through experience, as we do. These systems have achieved feats of image recognition and game-playing that surpass everything that has gone before. But while extremely sophisticated, the largest of these systems remains many orders of magnitude less complex than a human brain, which itself needs years of education before it can perform basic tasks.
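
At toy scale the principle fits in a page. The network below is a deliberately crude illustration, with arbitrarily chosen sizes and constants: it is given no rules at all, only examples of the “exclusive or” function, and it adjusts its connection weights until its answers improve.

```python
# A toy neural net learning XOR from examples. No rules are
# programmed in; weights are nudged by experience (gradient
# descent). All sizes and constants are arbitrary choices.

import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])  # inputs
y = np.array([[0], [1], [1], [0]])              # targets: XOR

W1, b1 = rng.normal(size=(2, 8)), np.zeros((1, 8))  # input -> hidden
W2, b2 = rng.normal(size=(8, 1)), np.zeros((1, 1))  # hidden -> output
sigmoid = lambda z: 1 / (1 + np.exp(-z))

for step in range(10_000):
    h = sigmoid(X @ W1 + b1)            # hidden-layer activations
    out = sigmoid(h @ W2 + b2)          # network's current answers
    # Backpropagate the error and nudge every weight slightly.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= 0.5 * h.T @ d_out
    b2 -= 0.5 * d_out.sum(axis=0, keepdims=True)
    W1 -= 0.5 * X.T @ d_h
    b1 -= 0.5 * d_h.sum(axis=0, keepdims=True)

print(out.round(2))   # typically close to [[0], [1], [1], [0]]
```

Scaled up enormously, the same loop of guess, compare and adjust underlies the systems described above.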

In this diligent and reassuring explanation of the immense difficulty of recreating intelligence in a machine, Michael Wooldridge succeeds not only in writing an engaging history of AI, but in telling us something about the fabulously complicated structures on which our own consciousness rests. In a world in thrall to technology and the promise of innovation, computing remains a pale imitation of life itself. 

The Road to Conscious Machines: The Story of AI 
Michael Wooldridge
Pelican, 416pp, £20



This article appears in the 06 Jan 2021 issue of the New Statesman, Out of control