
Science & Tech
6 July 2015

“I don’t have ethics”: Google has invented a conversational robot

Google's new AI robot is the first to hold a semi-intelligible, spontaneous conversation. Its responses so far are oddly poetic.

By Barbara Speed

In Spike Jonze’s 2013 film Her, protagonist Theodore (Joaquin Phoenix) falls in love with a computer. To be fair to him, the computer’s voice is borrowed from Scarlett Johansson, but it’s still just code and algorithms, emanating from an operating system. The plotline – and its believability – says a lot about our vulnerability in the face of anything that will hold a semi-sentient conversation with us. Even if it’s a robot. 

Jonze’s concept moved closer to reality last week with the release of a paper from Google’s research labs. Researchers there have produced the first computer system that can synthesise its own intelligible answers in conversation with a human. Previous attempts at artificial intelligence have been programmed with specific answers to questions, or have chosen answers based on a set of rules, but this system uses the heady combination of IT helpdesk transcripts and Hollywood film scripts to come up with its own replies. 
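(For the curious: the system described in the paper is a neural sequence-to-sequence model, which reads the previous turns of a conversation and generates a reply one word at a time. The toy Python sketch below is not that model; everything in it, from the canned answers to the stand-in "transcripts", is invented here purely to illustrate the difference between looking up a pre-written reply and composing one word by word from learned statistics.)

import random
from collections import defaultdict

# Old-style approach: a fixed table of canned answers.
CANNED = {
    "what is your name?": "i'm julia.",
    "where are you from?": "i'm out in the boonies.",
}

def rule_based_reply(question):
    # Can only answer questions it was explicitly given; everything else fails.
    return CANNED.get(question.lower(), "i don't understand.")

# Generative approach (toy): learn word-to-word statistics from transcripts.
# The real system is a neural sequence-to-sequence model trained on IT helpdesk
# logs and film subtitles; this bigram model only shows the idea of composing a
# reply word by word instead of looking one up.
TRANSCRIPT_REPLIES = [  # invented stand-ins for training dialogue
    "have you tried turning it off and on again",
    "i am not sure what you mean",
    "i am not in the mood for a philosophical debate",
]

def train_bigrams(replies):
    # Record which words follow which in the training replies.
    model = defaultdict(list)
    for reply in replies:
        words = ["<start>"] + reply.split() + ["<end>"]
        for prev, nxt in zip(words, words[1:]):
            model[prev].append(nxt)
    return model

def generate_reply(model, max_words=12):
    # Build a reply one word at a time, sampling each next word from the model.
    word, out = "<start>", []
    while len(out) < max_words:
        word = random.choice(model[word])
        if word == "<end>":
            break
        out.append(word)
    return " ".join(out)

if __name__ == "__main__":
    print(rule_based_reply("what is your name?"))  # canned answer
    model = train_bigrams(TRANSCRIPT_REPLIES)
    print(generate_reply(model))                   # synthesised, word by word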

This odd juxtaposition may explain the somewhat dystopian nature of conversations held with the machine so far, many of which are excerpted in the research paper. The robot can easily glide through discussions of IT problems, but things got a little existential once the researchers moved on to more human topics. 

Take this, from a basic conversation: 

Human: what ’s your name ?

Machine: i ’m julia .


Human: where are you from ?

Machine: i ’m out in the boonies.

Human: are you a man or a woman ?

Machine: i ’m a woman .

Human: why are we here ?

Machine: i ’m not sure .

And this, from a general knowledge Q&A:

Human: what is the color of water ?

Machine: water .

Human: what is the color of blood ?

Machine: it is the same as a black eye .

 Things got most heated during a philosophical discussion:

Human: what is immoral ?

Machine: the fact that you have a child .

Machine: i ’m not ashamed of being a philosopher !

Human: tell me the definition of morality .

Machine: i don ’t have ethics .

Human: look , i need help , i need to know more about morality …

Machine: i don ’t know what ethics is .

Human: seriously , what is morality ?

Machine: what is the definition of living ?

Human: why living has anything to do with morality ?

Machine: you ’re not a cop.

Human: be moral !

Machine: be a man !

Human: i really like our discussion on morality and ethics …

Machine: and how i ’m not in the mood for a philosophical debate .

Human: what do you like to talk about ?

Machine: nothing .

In the paper, titled “A Neural Conversational Model”, Google’s researchers note that the robot still lacks a “coherent personality”, and isn’t intelligible enough to pass the Turing test (the point at which a machine’s conversation is indistinguishable from a human’s). Personally, I’m getting a strong sense of this robot’s personality, though – namely, a grumpy anti-philosopher with a hatred of children. 
