
Long reads
15 August 2018

How AI could kill off democracy

Are we willing to give up our freedom in exchange for efficient, data-driven public services owned and run by tech giants?

By Jamie Bartlett

In May this year, to whoops and yeahs, Google’s CEO Sundar Pichai unveiled Duplex, the company’s artificial intelligence (AI)-powered personal assistant. “Hi. Erm, I’d like to reserve a table,” said the machine to an unsuspecting restaurant receptionist, in a perfectly non-robotic tone. “For four.”

“For four people you can just come,” replied the oblivious human.

“How long is the wait, err, to be seated?” pursued the assistant, to the exhilaration of the crowd. This idiot doesn’t realise she’s talking to a machine!

“Oh no, it’s not too busy,” said the human. “You can come for four people, OK?”

“Oooooh, I gotcha, thanks.”


“Yep, bye-bye.”

Click.

This demo was the culmination of years of Google research, and the company will soon be testing the AI with its Google Assistant. But the reaction to Duplex’s eerie imitation of a human was not as positive as the tech giant seems to have expected. “Silicon Valley is ethically lost, rudderless, and has not learned a thing,” tweeted tech critic Zeynep Tufekci, author of Twitter and Tear Gas.

Every generation has its own vision of dystopia – usually a dark reflection of whatever’s worrying the present. Duplex revealed ours: the idea that machines might replace humans. The upset surrounding the demonstration concentrated on the potential for manipulation and deceit – programming in the umms and ahhs of human conversation was especially criticised. (Google responded by pledging that the AI would identify itself as a machine.)

In a similar vein, a paper released in February by leading AI academics for Oxford’s Future of Humanity Institute outlined some of the terrifying security threats ahead, including explosive drones operated by anonymous criminals and manipulation by smart machines perfectly imitating a loved one. And even if we survive all that, there’s the “singularity”, the point at which artificial intelligence’s continual self-improvement sparks a runaway, self-replicating cycle we can’t control.

However, there’s an equally worrying – and more likely – vision of dystopia, which is that these radical technologies work brilliantly, making our lives far easier and more convenient. Recoil from Duplex if you want, but people get over the creep factor very quickly. According to a recent PwC survey, 14 per cent of British people already own a home assistant device such as Google Assistant or the Amazon Echo, and another 24 per cent plan to buy one. In the end, we prefer convenience over all else: privacy, social cohesion and even liberty. A world in which ever more decisions are made by machines and data may be more efficient. But it may not be very democratic.

There will inevitably be teething problems with Duplex, whether misheard instructions, angry staff or the like. But technologies such as this only ever improve. Computing power keeps getting cheaper, data keeps flowing and AI keeps getting smarter. Forget the Hollywood depictions: no machine is remotely close to human-level intelligence, meaning the ability to perform as well as humans across many different domains (often known as “general artificial intelligence”). Most experts don’t think this will be possible for another 50 to 100 years – but no one really knows.

That’s not where the real action is anyway. The rapid improvements in machine intelligence are in “domain-specific” AI, often using a technique known as “machine learning”. A human feeds an algorithm with data and teaches it what each input means. From that the AI can detect patterns, from which it can mimic a particular human behaviour or perform a very specific job.

That might mean listening to conversations and generating human-style responses to book a table. It might also mean driving up the interstate in an autonomous truck, predicting the weather, giving credit scores, measuring sentiment in text, or reading licence plates.
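To make that learning loop concrete, here is a minimal sketch in Python of the “measuring sentiment in text” task, using the scikit-learn library. The four toy reviews, the labels and the pipeline choices are all my own illustrative inventions; real systems train on millions of labelled examples.

```python
# A minimal sketch of domain-specific machine learning: a human labels
# the data, the algorithm finds the patterns, and the model then mimics
# that one narrow judgement (here, sentiment in text). Toy data only.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Step 1: a human feeds the algorithm data and says what each input means.
texts = [
    "great service, will come again",
    "wonderful food and friendly staff",
    "terrible wait, rude waiter",
    "cold food and a dirty table",
]
labels = ["positive", "positive", "negative", "negative"]

# Step 2: the model detects patterns (which words co-occur with which label).
model = make_pipeline(CountVectorizer(), LogisticRegression())
model.fit(texts, labels)

# Step 3: the model performs one very specific job on text it has never seen.
print(model.predict(["the staff were rude and the food was cold"]))
```

The recipe is the same for credit scores or licence plates; only the data and the labels change.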

AI devices are starting to outperform humans in a small but growing number of narrow tasks such as these. They are getting better at driving, brick-laying, fruit-picking, burger-flipping, banking, medical diagnosis, trading and automated stock-taking. Legal software firms are developing statistical prediction algorithms to analyse past cases and recommend trial strategies. Automated tools to analyse CVs are now routinely used by companies to help them filter out obviously unsuitable candidates. Using complex data models, software can now make predictions on investment strategies. Every year the list of things humans are better at than machines gets a little shorter.

There are reasons to be excited about this, however. One afternoon I met Jeremy Howard, an AI researcher based in San Francisco, who has helped pioneer the use of machine learning to diagnose medical conditions from millions of CT scans. “I always thought of medicine as very human – human judgements, human decisions,” he told me. “But it’s also a data problem.”

Howard’s algorithms take pathology images, alongside lab results such as blood tests, genomics and patient histories, and detect patterns invisible to the human eye. His systems have repeatedly outperformed top radiologists. Given that medical diagnostic errors affect 12 million people a year in the US alone, I’d prefer a machine over a tired, overworked doctor.
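Howard’s real systems are far more sophisticated, but the “data problem” framing can be shown in a toy sketch: image-derived features and lab results are simply concatenated into one feature vector and handed to a single model. Every detail below, from the feature counts to the synthetic data and the choice of a random forest, is my own invention rather than a description of his software.

```python
# A toy illustration (not Howard's actual system) of diagnosis as a
# data problem: image-derived features and lab results are concatenated
# into one feature vector and handed to a single classifier.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
n = 200

# Pretend these came from a scan-processing pipeline (texture scores etc).
image_features = rng.normal(size=(n, 8))
# Pretend these are lab results: blood markers, a genomic score, age.
lab_features = rng.normal(size=(n, 3))
X = np.hstack([image_features, lab_features])

# Synthetic ground truth: the condition depends on a pattern spanning
# BOTH modalities, which is what a human reading scans and charts
# separately is likely to miss.
y = (X[:, 0] + 0.5 * X[:, 8] + rng.normal(scale=0.5, size=n) > 0).astype(int)

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
print("training accuracy:", clf.score(X, y))
```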

As machines get smarter they will consistently produce better, cheaper decisions than humans, which will further entrench their place in our lives. If a machine’s diagnosis is repeatedly better than a human doctor’s, it could be unethical to ignore the machine’s advice. And a machine advising a government on how to allocate police resources to save money and cut crime would be hard to resist.

In fact, AI will become invaluable in the effort to solve the complicated and tangled challenges society faces: climate change, energy problems, hunger. Smart cities and urban planning will almost certainly be run by algorithm in the future, dramatically reducing traffic congestion. Sophisticated AI cyber-attacks of the future will require equally sophisticated AI-powered cyber-defences.

****

These wonderful advances will be impossible to resist. But they carry an implicit threat: that human decisions start looking increasingly irrational, inefficient and silly by comparison. As Yuval Noah Harari pointed out in his 2016 book Homo Deus, the emergence of smart machines poses an existential threat to the simple but important idea that humans are the best decision-makers on the planet. As an increasing number of important political and moral decisions are made by the cold and unbending logic of software, there will be profound consequences for our social and democratic order.

There are several reasons why this is a problem. The more decisions are made by algorithm, the less we mere mortals will really understand how or why they are made, and what to do if things go wrong.

An example of this was revealed in May when the then health secretary Jeremy Hunt told the House of Commons that “a computer algorithm failure” meant that 450,000 patients in England had missed breast cancer screenings. As many as 270 women might have had their lives shortened as a result (although some, such as the NS health columnist Phil Whitaker, have argued that the NHS’s approach to screening is itself flawed).
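The precise fault was never published in detail, so the sketch below is purely hypothetical: the age cut-offs and the off-by-one comparison are my inventions, not the NHS code. It shows how a single wrong comparison in invitation logic can silently drop an entire cohort.

```python
# Hypothetical illustration only: NOT the actual NHS code. In England,
# screening invitations run from age 50 up to a woman's 71st birthday,
# so a final invitation should still go out at age 70. One wrong
# comparison silently drops that entire cohort.
def should_invite_buggy(age: int) -> bool:
    return 50 <= age < 70   # bug: 70-year-olds never get a final invitation

def should_invite_fixed(age: int) -> bool:
    return 50 <= age <= 70  # final invitation still sent at age 70

for age in (49, 50, 69, 70, 71):
    print(age, should_invite_buggy(age), should_invite_fixed(age))

# Nothing crashes and nothing is logged: the failure is invisible until
# someone audits who was never invited, which in the NHS case took years.
```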

Overall, more lives will be saved than lost because of greater use of such machines. But we will lose the ability to hold people accountable. Who is responsible for the computer algorithm failure? The tech guy who wrote the software, in good faith, years ago? The person who commissioned it? The people feeding the data in? “It was the algorithm,” will, I expect, become the politicians’ favourite non-apology apology.

What’s more, although they appear to be objective, algorithms always start with a question framed by whoever programmes them. As a result, they sometimes reproduce the biases of their creators or of whatever data is loaded in.

Some police forces – especially in the US – rely on data models to decide where to put police officers. Crime tends to take place in poor (and therefore often ethnic minority-dominated) neighbourhoods, which means more police operate in those areas. Having more police around generally means more people in those neighbourhoods are arrested. The utilitarian bean-counters will clock this up as a success. But the data will continue to feed back into the model, creating a self-perpetuating loop of growing inequality and algorithm-driven racial targeting. And because it’s all so opaque – and because it will appear to cut crime – no one will be quite sure how to fix it.
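The loop is easy to demonstrate. In the toy simulation below, two neighbourhoods have identical true crime rates but one starts with slightly more recorded arrests, and patrols are allocated in proportion to the arrest data. Every number in it is invented.

```python
# A toy simulation of the predictive-policing feedback loop. Two
# neighbourhoods have the SAME underlying crime rate, but one starts
# with slightly more recorded arrests. Police are allocated in
# proportion to past arrests, and more patrols mean more of the
# (identical) crime gets recorded. All numbers are invented.
import random

random.seed(42)

TRUE_CRIME_RATE = 0.1          # identical in both neighbourhoods
arrests = {"A": 55, "B": 45}   # a small initial imbalance in the data
TOTAL_PATROLS = 100

for year in range(1, 11):
    total = sum(arrests.values())
    for hood in arrests:
        patrols = round(TOTAL_PATROLS * arrests[hood] / total)
        # Each patrol observes crime at the same true rate everywhere;
        # only the number of observers differs between neighbourhoods.
        new_arrests = sum(random.random() < TRUE_CRIME_RATE
                          for _ in range(patrols * 10))
        arrests[hood] += new_arrests
    print(f"year {year}: {arrests}")
```

The arbitrary initial imbalance never corrects itself, and the absolute gap in recorded arrests widens every year, even though the neighbourhoods never differed.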

In her recent book Weapons of Math Destruction, Cathy O’Neil documented dozens of instances where important decisions – relating to hiring policy, teacher evaluations, police officer distribution and more – were effectively outsourced to proprietary data and algorithms, even though such decisions have important moral dimensions and consequences. 

For example, take the proliferation of well-meaning apps designed to help you decide how to vote. You put in your views and preferences and the machine spits out a party for you. Nearly five million people in Britain have already used the voting app iSideWith in multiple elections. That so many asked an app – the workings of which they most likely do not understand – how to fulfil their most important duty as a citizen seems to have bothered very few.
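Part of what makes this unsettling is how simple the mechanics can be. The sketch below is an invented toy version of such an app, not a description of how iSideWith actually works: the parties, questions and positions are all made up.

```python
# An invented toy vote-matching app -- NOT how iSideWith works. The
# user answers policy questions on a -2..+2 scale and is matched to
# whichever party's stated positions are closest. All data is made up.
QUESTIONS = ["raise top rate of tax", "increase defence spending",
             "nationalise railways"]

PARTY_POSITIONS = {
    "Party X": [+2, -1, +2],
    "Party Y": [-2, +2, -2],
    "Party Z": [0, 0, +1],
}

def recommend(user_answers):
    """Return the party with the smallest squared distance to the user."""
    def distance(positions):
        return sum((u - p) ** 2 for u, p in zip(user_answers, positions))
    return min(PARTY_POSITIONS, key=lambda party: distance(PARTY_POSITIONS[party]))

# A voter who mostly agrees with Party X's invented positions:
print(recommend([+2, 0, +1]))   # -> "Party X"
```

Everything that matters, from which questions are asked to how distance is measured, is an editorial choice buried inside the code.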

Such opacity and complexity are often good for people in charge. It is unsurprising that the world’s autocrats have swiftly embraced big data, AI and connectivity. The Xinjiang region of China is an example of what a hi-tech police state might look like – a kind of Singapore on steroids. A sophisticated infrastructure of surveillance and AI is being rolled out there, with the ostensible aim of suppressing separatist activity in this predominantly Muslim region. Facial recognition, location data, satellite tracking and personal addresses are analysed to alert the authorities when a target strays 300 metres or more from their home. Other software tries to predict and thwart terror attacks before they occur.
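Mechanically, such an alert is a simple geofence check. The sketch below is a generic illustration of that mechanism only; the coordinates and radius are invented, and it makes no claim about the actual software deployed in Xinjiang.

```python
# A generic geofence check of the kind described: alert when a tracked
# person strays more than 300 metres from a registered home address.
# Coordinates are invented; this illustrates the mechanism only.
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance between two lat/lon points, in metres."""
    r = 6_371_000  # mean Earth radius in metres
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

HOME = (39.9042, 116.4074)  # invented registered address
ALERT_RADIUS_M = 300

def check(position):
    dist = haversine_m(*HOME, *position)
    if dist > ALERT_RADIUS_M:
        print(f"ALERT: subject {dist:.0f} m from home")

check((39.9042, 116.4120))  # roughly 390 m east: triggers an alert
```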

It’s easy to assume these echoes of the Hollywood film Minority Report only occur in autocratic regimes such as Xi Jinping’s China. But if it works, it will spread. In 2016 Ecuador’s police introduced facial recognition technology – and it is being credited with a large drop in crime. The Chinese company Yitu, which specialises in facial recognition tech, has several offices in Africa and is looking at setting up in Europe too. It’s not hard to imagine how the British authorities could use and then misuse this capability: first for terrorists, then criminals, then activists, and finally people not putting the recycling in the correct bin.

Big data and AI also create brand new, sometimes very subtle, forms of surveillance. The Chinese government is working on a “social credit system” to rate the trustworthiness of its 1.4 billion citizens, which will score people on every facet of their life: credit rating, adherence to the law, even “social sincerity”. The policy document states that this citizen score “will forge a public opinion environment where keeping trust is glorious”. I’m sure it will – because everyone will be terrified.
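Mechanically, a citizen score need be nothing more exotic than a weighted sum. The facets, weights and numbers below are my own invention to illustrate the idea, not details of the real system.

```python
# An invented illustration of a "citizen score" as a weighted sum over
# facets of life. The facets, weights and inputs are made up; the point
# is how mundane the mechanics are compared with their social effect.
WEIGHTS = {
    "credit_rating": 0.4,
    "legal_record": 0.3,
    "social_sincerity": 0.3,   # e.g. online speech, associations
}

def citizen_score(facets: dict) -> float:
    """Each facet is rated 0-100; the score is their weighted average."""
    return sum(WEIGHTS[name] * value for name, value in facets.items())

print(citizen_score({"credit_rating": 80,
                     "legal_record": 95,
                     "social_sincerity": 40}))   # 72.5
```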

****

The wider question is what sort of political system will be best suited to the age of smart machines. For the last 200 years there has been a conveniently self-reinforcing cycle: individual freedom was good for the economy, and that economy produced more well-off people who valued freedom. But what if that link was weakened? What if economic growth in the future no longer depended on individual freedom and entrepreneurial spirit, but on capital and intellectual ownership of the smart machines that drove research, productivity and entrepreneurship?

There is no indication that a centrally planned, state-controlled economy can’t thrive in this new age. It might even be better than complicated, rights-obsessed democracies. The last few years suggest digital technology thrives perfectly well under monopolistic conditions: the bigger a company is, the more data and computing power it gets, and the more efficient it becomes; the more efficient it becomes, the more data and computing power it gets, in a self-perpetuating loop.

Similarly, Chinese firms enjoy access to huge amounts of data because of the size of the country’s population, with relatively few restraints on how that data can be used. There are no pesky privacy or data protection laws, such as Europe’s new GDPR rules, to slow them down. Unlike Google, the Chinese tech conglomerate Alibaba has no internal ethics division.

This, incidentally, is why – for all the chest-beating in the US Senate over Facebook’s data-harvesting – onerous regulation of tech companies probably won’t come to America: the US is too afraid of falling behind. Both China and Russia have made it clear they want to lead the world in AI, and both are pouring billions of dollars into it. Vladimir Putin himself has said that whoever leads in AI will lead the world, and has compared Russia’s spending to the “State Commission for the Electrification of Russia”, Lenin’s 1920 prototype for the Soviet five-year plans.

Chinese companies have already enjoyed some major breakthroughs, mostly unreported by Western media (for example, Alibaba beating Microsoft by one day to supposedly better-than-human performance in reading comprehension earlier this year). China could soon become the world leader in AI, which would also mean it could shape the future of the technology and the limits on how it is used.

For many decades, liberal democracies with strong individual freedoms have been the best at providing the things that most people care about: wealth, individual liberty, democratic accountability and stability. The emergence of smart machines and big data means this might not always be the case.

Research already suggests that democracy isn’t all that popular. A recent survey in the Journal of Democracy found that only 30 per cent of US millennials agree that “it’s essential to live in a democracy”, compared with 72 per cent of those born in the 1930s. The results in most other democracies demonstrate a similar pattern. These statistics won’t improve if Western liberal democracy can’t solve our most pressing issues or devise the policies people demand. In future we might even start to look enviously at the alternatives – whether that is predictive policing, data-led decisions or even state-owned AI.

However, I don’t think the idea of democracy will disappear, especially in an age where everyone has a voice and a platform. We could even still have plebiscites and MPs and the rest. But it would be little more than a shell system, in which real power and authority were increasingly centralised in a small group of techno-wizards and decisions were taken by “black box” AIs whose reasoning is inevitably opaque.

This would be less a tumble into dystopia than a damp and weary farewell to an outdated political system. The worst part is that if a “machinocracy” was able to deliver wealth, prosperity and stability, many people would probably be perfectly happy with it.

Jamie Bartlett is director of the Centre for the Analysis of Social Media and the author of “The People vs Tech: How the internet is killing democracy (and how we save it)”.


This article appears in the 15 Aug 2018 issue of the New Statesman, The inside story of Mossad