
The important question is not whether the Google engineer Blake Lemoine was right to claim that the artificial intelligence system he helped to develop, called the Language Model for Dialogue Applications or LaMDA, is sentient. Rather, the question is why a claim so absurd, one rejected by the vast majority of experts, has caused so much feverish media debate over the past week.
Sure, there is an interesting discussion to be had about whether AI could ever be sentient and, if so, when and how we might build one that is. The more immediate issue, however, is why today's AI is so poorly understood, not just by lay people but apparently even by some Silicon Valley technicians, and whether we should be concerned about that.