22 May 2024, updated 23 May 2024 2:16pm

In defence of the new Luddism

The AI boom poses a threat to copyright, privacy and human rights – but no technology should be above the law.

By Marina Wheeler

Recognising that society is cowering before a “tsunami of tech hype”, Susie Alegre has written an eye-opening book to dispel it. The central message of Human Rights, Robot Wrongs is clear: AI “needs to serve, rather than subvert, our humanity”. In around 200 pages, Alegre highlights some egregious “robot wrongs” – instances where our humanity is subverted – and challenges us to act.

First, she says, we need to consider what really matters and what makes us human. As befits an international lawyer, her point of reference is the Universal Declaration of Human Rights (UDHR). Drafted after the Holocaust, when “civilised” man inflicted unspeakable suffering on fellow man, it distils, in the form of rights, those aspects of existence which, by consensus, we hold most dear. The right to life; the right to respect for private and family life; the prohibition against inhuman and degrading treatment – all these core rights are threatened, says Alegre, by an unthinking use of AI.

Fake women are proving a popular application of AI – the fantasy of replacing women with “automata” is as old as time. From King Pygmalion’s female sculpture in ancient Cypriot mythology to the suburban Stepford Wives, it is all around us, says Alegre. “Just ask Alexa.”

Sophia, a Saudi citizen and a humanoid “social robot”, was Elle magazine’s cover girl in Brazil, despite having the look of “a drunken teenager desperately trying to look sober”. Then, despite her declared intent to destroy all humans, she became the United Nations Development Programme’s Innovation Champion for Asia and the Pacific. Fake women are deployed to address gaping holes in women’s representation, Alegre observes – “a female robot CEO can apparently boost diversity at board level without the risk of maternity leave, menopausal rage or cellulite”. There’s also Shudu, a black supermodel and social media influencer. She is “making money, but that money is going to her white male creators not the diverse communities she appears to represent. Barbie would be depressed.”     

Sex, like death, is a defining human experience with which science and technology interact uneasily. Apparently overwhelmed by the complexity of human interactions, both sexes have taken to using AI chatbots. Wanting to feel safe in a relationship is one thing, but seeking complete control over another person? Not only is this dehumanising, says Alegre, but it often goes badly. A chatbot’s responses are learned from previous interactions, including all “the loneliness, anger and insecurity people bring with them to their virtual relationships”. Darker trends in behaviour have led makers to withdraw certain chatbot models, but what will become of the data recording these intensely private exchanges?


As people live longer, beyond their ability to care for themselves, huge sums are being invested in developing robots to tend to the needs of the elderly and infirm. Lifting a person into a wheelchair can be a backbreaking task, but this care provides a vital opportunity for chat, for reassurance, for skin-to-skin touch. When people are starved of human contact, does it undermine their dignity to fob them off with something less? Emphatically yes, says Alegre: removing the human aspects of care work dehumanises both the carers and those they care for. Rather than replacing humans, we need machines that perform specific functional tasks better, such as lifting. To create a digital illusion of loving – or caring – will compound isolation, she warns. It is the “corporate capture of human connection”.

Generative AI such as ChatGPT may have fired the public imagination, but Alegre sees human creativity at risk. Asking ChatGPT “Who is Susie Alegre?” proved dispiriting: the system failed to find her, despite 25 years’ worth of publications, including a well-received book, Freedom to Think. When pressed, it attributed her work to various male authors. “Writing women out of their own work is not unique to AI,” she observes dryly. Nevertheless, the failure is striking. “Whatever you ask it, ChatGPT will just make up plausible stuff.” In effect it “sucks up all the ideas it can find” – using copyright-protected work to feed large language models – and “regurgitates them without attribution” (or consent, or remuneration). For an independent thinker or creator, she says, this is nothing less than “intellectual asset-stripping”.

Copyright and human rights laws give artists economic and moral rights in their work, but for individuals the costs of mounting a legal challenge are too great. In 2023, strikes by US screenwriters – prompted in part by the threat generative AI poses to their future work – secured restrictions on its use, showing how collective action can mitigate this persistent problem of access to justice.

Alegre gives other examples where AI has failed. There are legal submissions drafted using ChatGPT which cite precedents that do not exist. Technology that sifts and screens documents is useful to lawyers, and there is a role for automation in handling certain disputes. But failing to recognise the technology’s limits risks creating a legal process “as fair as a medieval witch trial”. Judges relying on ChatGPT have not emerged well either. Judging disputes often involves a context-dependent, textured mix of intellect and feeling – something AI cannot undertake.

The dangers inherent in trying to replace human reasoning also arise in warfare. Drones can kill the enemy without exposing your own forces to immediate jeopardy. But, as viewers of the 2015 film Eye in the Sky will recall, when to strike can still pose a moral dilemma. A lawful, proportionate attack must weigh the military advantage against the risk of harm to civilians – harm the attacker is duty-bound to minimise. Alegre doubts AI can reliably work out what is lawful and what is not. Furthermore, AI remains prone to “hallucinations” and “inexplicably incorrect outputs”. If fully autonomous weapons are in use, “this means arbitrary killing”. Who is accountable when mass death results from a glitch in the system? The use of AI blurs lines of responsibility when things go wrong, as they inevitably do.

The fallibility of machines, and the need to hold those responsible for harm to account, are key themes of this interesting book. There are cases where this is happening. Errors in the Horizon software system resulted in hundreds of sub-postmasters and sub-postmistresses being wrongfully convicted (and in some cases imprisoned) for fraud, false accounting and theft. Alegre sees “automation bias” – whereby “people tend to favour information produced by a machine over contrary information right in front of their noses” – in Post Office management’s refusal to accept that Horizon was flawed. In a high-profile public inquiry, Post Office and Fujitsu employees are being confronted (in the presence of some of those they helped send to prison) with their own automation bias, and their willingness to tolerate injustice to “protect” a corporate reputation.

In another set of legal proceedings – the inquest into the death of 14-year-old Molly Russell – executives from the social media companies Meta and Pinterest were summoned and challenged. As the inquest heard, Molly suffered from a depressive illness and was vulnerable due to her age. Recommendation algorithms served her text, images and video of suicide and self-harm that damaged her mental health, leading her to take her own life in 2017. This tragic case informed the Online Safety Act 2023 and helped secure its passage. Meanwhile, in the US, 41 states are suing Meta, accusing it of encouraging addictive behaviour in children while concealing its own research into the harm caused.

Alegre shines a light on the less visible social costs of our use of AI. Workers on pitiful pay in Kenya, Uganda and India moderate content to enable ChatGPT to recognise graphic descriptions of child abuse, bestiality, murder, suicide, torture, self-harm and incest. To do so, they must sift through a mass of disturbing material. This “exporting of trauma” is recognised as a global human rights problem, says Alegre, and has already led Meta to pay $52m to settle a single dispute brought by content moderators in the US. It is also a “fallacy”, she says, “that virtual worlds are somehow greener”. Manufacturing and disposing of the devices we use has an enormous environmental impact, and even before the AI boom the information and communications technology (ICT) sector produced more emissions than global aviation fuel.

Urgent calls for global AI regulation can be misleading, warns Alegre, as they suggest these new technologies are currently beyond our control. They aren’t. We have privacy laws, anti-discrimination laws, labour laws, environmental laws, intellectual property laws, criminal laws and the human rights laws that underpin them. Rather than creating “grandiose new global regulators”, she says, we need to enforce existing laws and “identify specific gaps that need to be filled”.

Anticipating being labelled a Luddite, Alegre explains that the Luddites are misunderstood. When they destroyed cloth-making machinery during the Industrial Revolution, they weren’t resisting innovation but protesting against “unethical entrepreneurs” who used technology to depress wages “while trampling over workers’ rights” and “cheating consumers with inferior products”. They lost, she says, “not because they were wrong, but because they were crushed”.

AI is not evil, believes Alegre, but it has no moral compass. Guiding the direction of “scientific endeavour” to safeguard human rights is up to us as sentient beings. There is nothing inevitable about where we are going. We have a choice.

In writing Human Rights, Robot Wrongs, Alegre plainly hopes to enlist the like-minded and avoid the fate of the Luddites. I sincerely hope that she does.

Human Rights, Robot Wrongs: Being Human in the Age of AI
Susie Alegre
Atlantic, 224pp, £12.99




This article appears in the 22 May 2024 issue of the New Statesman, Spring Special 2024