Science & Tech
18 June 2022

The dangerous fallacy of sentient AI

Ignore the Silicon Valley messiahs. They only expose how little most of us know about the technology and its ethics.

By Philip Ball

The important question is not whether the Google engineer Blake Lemoine was right to claim that the artificial intelligence algorithm he helped to develop, called the Language Model for Dialogue Applications or LaMDA, is sentient. Rather, the question is why a claim so absurd, and one rejected by the vast majority of experts, has caused so much feverish media debate over the past week.

Sure, there is an interesting discussion to be had about whether AI could ever be sentient and, if so, when and how we might make one that is. The more immediate issue, however, is why today’s AI is so poorly understood, not just by lay people but apparently even by some Silicon Valley technicians, and whether we should be concerned about that.
