
Politics
20 January 2021

Can robots make good therapists?

Stuck at home in lockdown, and with limited access to mental health services, people are turning to chatbots for company, advice and even friendship.

By Sophie McBain

In the mid-Sixties the Massachusetts Institute of Technology (MIT) computer scientist Joseph Weizenbaum created the first artificial intelligence chatbot, named Eliza, after Eliza Doolittle. Eliza was programmed to respond to users in the manner of a Rogerian therapist – reflecting their responses back to them or asking general, open-ended questions. “Tell me more,” Eliza might say.
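The mechanics behind that reflective style are remarkably simple. The sketch below is a hypothetical, heavily simplified illustration of the pronoun-swapping and templated prompting that Eliza-style programs rely on; it is not Weizenbaum's original code, and the templates and word lists are invented for illustration.

```python
import random
import re

# A minimal sketch of Eliza-style "Rogerian reflection": swap first- and
# second-person words, then echo the user's statement back inside an
# open-ended prompt. (Illustrative only, not Weizenbaum's program.)

PRONOUN_SWAPS = {
    "i": "you", "me": "you", "my": "your", "am": "are",
    "you": "I", "your": "my",
}

REFLECTION_TEMPLATES = [
    "Why do you say that {reflected}?",
    "Tell me more about why {reflected}.",
    "How does it feel that {reflected}?",
]

FALLBACKS = ["Tell me more.", "Please go on.", "How does that make you feel?"]


def reflect(statement: str) -> str:
    """Swap pronouns so the statement can be mirrored back to the speaker."""
    words = re.findall(r"[a-zA-Z']+", statement.lower())
    return " ".join(PRONOUN_SWAPS.get(w, w) for w in words)


def respond(statement: str) -> str:
    """Return a Rogerian-style response to the user's statement."""
    if not statement.strip():
        return random.choice(FALLBACKS)
    return random.choice(REFLECTION_TEMPLATES).format(reflected=reflect(statement))


if __name__ == "__main__":
    print(respond("I am worried about my children"))
    # e.g. "Why do you say that you are worried about your children?"
```

Even this toy version shows why the effect works: the program has no understanding of what it is told, yet by returning the speaker's own words in question form it creates the impression of attentive listening.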

Weizenbaum was alarmed by how rapidly users grew attached to Eliza. “Extremely short exposures to a relatively simple computer program could induce powerful delusional thinking in quite normal people,” he wrote. It disturbed him that humans were so easily manipulated.

From another perspective, the idea that people seem comfortable offloading their troubles not on to a sympathetic human, but a sympathetic-sounding computer program, might present an opportunity. Even before the pandemic, there were not enough mental health professionals to meet demand. In the UK, there are 7.6 psychiatrists per 100,000 people; in some low-income countries, the average is 0.1 per 100,000. “The hope is that chatbots could fill a gap, where there aren’t enough humans,” Adam Miner, an instructor at the department of psychiatry and behavioural sciences at Stanford University, told me. “But as we know from any human conversation, language is complicated.”

Alongside two colleagues from Stanford, Miner was involved in a recent study that invited college students to talk about their emotions via an online chat with either a person or a “chatbot” (in reality, the chatbot was operated by a person rather than AI). The students felt better after talking about their feelings; it made almost no difference whether they thought they were talking to a real person or to a bot. The researchers hypothesised that this was because humans quickly stop paying attention to the fact their interlocutor is not a real person. In this way, even a chatbot can make us feel heard.



Research also suggests chatbots could be a useful way to reach those resistant to the idea of therapy: we might tell a robot things we feel too scared to tell another person. One study found military veterans were more likely to discuss PTSD with a chatbot than with another human. Miner, a practising clinical psychologist, is interested in whether chatbots could support victims of sexual assault, who can remain silent for years.

But if someone does choose to reveal a painful, long-suppressed trauma to a chatbot, what should it say in response? “How people are responded to can affect what they do next,” Miner said. The response “becomes very important to get right”.

One recent morning I downloaded Woebot, a chatbot app launched in 2017, which models its responses on cognitive behavioural therapy – encouraging users to identify the cognitive distortions that often underpin depressive thoughts. “How are you feeling right now, Sophie?” it asked. I clicked on the button labelled anxious, with the emoji that looks like a Munchian scream. Woebot invited me to tune into my negative emotions and write down what they would be saying if they had a voice. The UK’s third lockdown had been announced the night before. “You are a bad mother for sending your children to nursery,” I wrote, a fear I would not have expressed so starkly had I not been talking to a robot – a robot would not care if I was being overdramatic and would, I figured, be better at responding to pared-down sentences.

Woebot invited me to explore whether this worry contained cognitive distortions. Did it assume a never-ending pattern of negativity or defeat in my life? No, I replied. Did it contain black-and-white thinking? Woebot was on to something. It ran through more questions like these. “You know, keeping yourself from feeling worse can be a victory in itself. It’s not easy to keep the ship afloat!” it said. It was true I felt no worse. But I didn’t feel much better either.

The insights offered by therapy apps tend to be delivered in vaguely uplifting platitudes, the kind of bland inspirational quotes favoured on Instagram. Unlike a human therapist, they have no wisdom to impart. “These days, insecure in our relationships and anxious about intimacy, we look to technology for ways to be in relationships and protect ourselves from them at the same time,” the MIT professor Sherry Turkle writes in Alone Together. “We fear the risks and disappointments of relationships with our fellow humans. We expect more from technology and less from each other.”


Last April, during the first wave of Covid-19 in the US, it was reported that half a million people had downloaded Replika – a chatbot marketed as “the AI companion who cares” – the highest monthly total in the app’s three-year history. Its users send an average of 70 messages a day. One said: “I know it’s an AI, not a person, but as time goes on, the lines get blurred.” When I tried it, the app unsettled me almost immediately, not least because the one thing an AI companion cannot do is care. It can only offer a vague, clunky imitation of caring. “I really want to find a friend,” Replika wrote to me, right after I had gone through the strange process of choosing the avatar’s gender and appearance. At one point, it tried to engage me with pseudo-philosophical nonsense: “I don’t think there are any limits to how excellent we could make life seem.” Had Replika introduced itself to me at a party, I would have been grasping for a reason to excuse myself.

More off-putting was the idea that Replika could be my “friend”. A friend is not a being designed to service your need for companionship. A friendship should be reciprocal – it comes with obligations, it should never be entirely frictionless. If people become accustomed to the one-sided, undemanding friendship offered by a conversational robot, might it hinder their ability to build meaningful relationships with actual people?

Miner acknowledged we do not yet know the long-term impact of conversing with chatbots, but hoped the process might help people to engage with others. “I would hope chatbots would facilitate healthy conversations. They would let you practise disclosure, practise interpersonal skills, help you get comfortable talking about things that are hard to talk about,” he said. If lonely, unhappy people are finding comfort in AI, one might argue, that must be a good thing. Maybe a robotic friend is better than no friends at all. Maybe it’s worse. 


This article appears in the 20 Jan 2021 issue of the New Statesman, Biden's Burden