
Long reads
26 February 2020 (updated 27 February 2020, 3:28pm)

Slouching towards dystopia: the rise of surveillance capitalism and the death of privacy

Our lives and behaviour have been turned into profit for the Big Tech giants – and we meekly click “Accept”. How did we sleepwalk into a world without privacy?

By John Naughton

Suppose you walk into a shop and the guard at the entrance records your name. Cameras on the ceiling track your every step in the store, logging which items you look at and which you ignore. After a while you notice that an employee is following you around, recording on a clipboard how much time you spend in each aisle. And when you finally choose an item and bring it to the cashier, she won’t complete the transaction until you reveal your identity, even if you’re paying cash.

Another scenario: a stranger is standing at the garden gate outside your house. You don’t know him or why he’s there. He could be a plain-clothes police officer, but there’s no way of knowing. He’s there 24/7 and behaves like a real busybody. He stops everybody who visits you and checks their identity. This includes taking their mobile phone and copying all its data on to a device he carries. He does the same for family members as they come and go. When the postman arrives, this stranger insists on opening your mail, or at any rate on noting down the names and addresses of your correspondents. He logs when you get up, how long it takes you to get dressed, when you have meals, when you leave for work and arrive at the office, when you get home and when you go to bed, as well as what you read. He is able to record all of your phone calls, texts, emails and the phone numbers of those with whom you exchange WhatsApp messages. And when you ask him what he thinks he’s doing, he just stares at you. If pressed, he says that if you have nothing to hide then you have nothing to fear. If really pressed, he may say that everything he does is for the protection of everyone.

A third scenario: you’re walking down the street when you’re accosted by a cheery, friendly guy. He runs a free photo-framing service – you just let him copy the images on your smartphone and he will tidy them up, frame them beautifully and put them into a gallery so that your friends and family can always see and admire them. And all for nothing! All you have to do is to agree to a simple contract. It’s 40 pages but it’s just typical legal boilerplate – the stuff that turns lawyers on. You can have a copy if you want. You make a quick scan of the contract. It says that of course you own your photographs but that, in exchange for the wonderful free framing service, you grant the chap “a non-exclusive, transferable, sub-licensable, royalty-free and worldwide licence to host, use, distribute, modify, copy, publicly perform or display, translate and create derivative works” of your photos. Oh, and also he can change, suspend, or discontinue the framing service at any time without notice, and may amend any of the agreement’s terms at his sole discretion by posting the revised terms on his website. Your continued use of the framing service after the effective date of the revised agreement constitutes your acceptance of its terms. And because you’re in a hurry and you need some pictures framed by this afternoon for your daughter’s birthday party, you sign on the dotted line.

All of these scenarios are conceivable in what we call real life. It doesn’t take a nanosecond’s reflection to conclude that if you found yourself in one of them you would deem it preposterous and intolerable. And yet they are all simple, if laboured, articulations of everyday occurrences in cyberspace. They describe accommodations that in real life would be totally unacceptable, but which in our digital lives we tolerate meekly and often without reflection.
The question is: how did we get here? 

***

It’s a long story, but with hindsight the outlines are becoming clear. Technology comes into it, of course – but plays a smaller part than you might think. It’s more a story about human nature, about how capitalism has mutated to exploit digital technology, about the liberal democratic state and the social contract, and about governments that have been asleep at the wheel for several decades.

To start with the tech: digital is different from earlier general-purpose technologies in a number of significant ways. It has zero marginal costs: once you have made the investment to create something, it costs almost nothing to replicate it a billion times. It is subject to very powerful network effects, which mean that once your product becomes sufficiently popular it becomes, effectively, impregnable. The original design axioms of the internet – no central ownership or control, and indifference to what it was used for so long as users conformed to its technical protocols – created an environment for what became known as “permissionless innovation”. And because every networked device had to be identified and logged, the internet was also, from the start, a giant surveillance machine.
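To see why those two properties reward scale above all else, consider a back-of-the-envelope sketch in Python. The figures are invented, and Metcalfe’s rule of thumb (network value growing roughly with the square of the user count) is an assumption of the sketch rather than anything established above – but the asymmetry it illustrates, flat costs against exploding value, is what makes a sufficiently popular product impregnable.

```python
# Back-of-the-envelope: zero marginal cost plus network effects.
# FIXED_COST and MARGINAL_COST are invented illustrative figures;
# value uses Metcalfe's rule of thumb (~n^2), in arbitrary units.

FIXED_COST = 10_000_000   # build the service once
MARGINAL_COST = 0.01      # near-zero cost to serve each extra user

for users in (1_000, 1_000_000, 100_000_000):
    cost = FIXED_COST + MARGINAL_COST * users      # grows almost not at all
    value = users ** 2 / 1_000_000                 # grows quadratically
    print(f"{users:>11,} users  cost ≈ ${cost:>12,.0f}  value ≈ {value:>15,.0f}")
```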


Since we humans are social animals, and the internet is a communications network, it is not surprising that we adopted it so quickly once services such as email and web browsers had made it accessible to non-techies. But because providing those services involved expense – on servers, bandwidth, tech support and the rest – people had to pay for them. (It may seem incredible now, but once upon a time having an email account cost money.) Then newspaper and magazine publishers began putting content on to web servers that could be freely accessed, and in 1996 Hotmail was launched (symbolically, on 4 July, Independence Day) – meaning that anyone could have email for free.

Hotmail quickly became ubiquitous, and it became clear that if a business wanted to harness those powerful network effects, it had to Get Big Fast; the best way to do that was to offer services that were free to use. All that remained was to find a business model that could finance services growing at exponential rates while providing a decent return for investors.


Spreading the word: Sheryl Sandberg brought Google’s business model to Facebook in 2008. Credit: Facebook

That problem was still unsolved when Google launched its search engine in 1998. Usage grew exponentially because it was manifestly better than its competitors. One reason for its superiority was that it monitored very closely what users searched for and used this information to improve the algorithm. So the more people used the engine, the better it got. But when the dot-com bubble burst in 2000, Google was still burning money rather than making it, and its two biggest venture capital investors, John Doerr of Kleiner Perkins and Michael Moritz of Sequoia Capital, started to lean on its founders, Larry Page and Sergey Brin, to find a business model.

Under that pressure they came up with one in 2001. They realised that the data created by their users’ searches could be used as raw material for algorithms that made informed guesses about what those users might be interested in – predictions that could be sold to advertisers. In this way what had been dismissed as mere “data exhaust” became valuable “behavioural surplus”: information given off by users from which saleable predictions could be made. Between that epiphany and Google’s initial public offering in 2004, the company’s revenues increased by over 3,000 per cent.
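What that epiphany amounts to can be caricatured in a few lines of Python. This is a toy, assuming nothing about Google’s actual systems – the categories, keywords and queries below are all invented – but it shows the shape of the pipeline: raw searches (“data exhaust”) in, a saleable interest profile (“behavioural surplus”) out.

```python
from collections import Counter

# Invented advertiser-facing categories and the search keywords that
# feed them; a real system would infer these, not hard-code them.
CATEGORY_KEYWORDS = {
    "travel":  {"flights", "hotel", "beach"},
    "fitness": {"gym", "protein", "running"},
}

def interest_profile(search_log: list[str]) -> Counter:
    """Turn a user's raw queries into an interest profile advertisers can bid on."""
    profile = Counter()
    for query in search_log:
        for category, keywords in CATEGORY_KEYWORDS.items():
            if keywords & set(query.lower().split()):  # any keyword matched
                profile[category] += 1
    return profile

log = ["cheap flights to lisbon", "hotel deals lisbon", "running shoes"]
print(interest_profile(log).most_common())  # [('travel', 2), ('fitness', 1)]
```

The point is the direction of flow: the user supplies the raw material for nothing, and the resulting profile is sold to someone else.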

Thus was born a new business model that the American scholar Shoshana Zuboff later christened “surveillance capitalism”, which she defined as “a new economic order that claims human experience as the raw material for hidden commercial practices of extraction, prediction and sales”. Having originated at Google, it was conveyed to Facebook in 2008 when a senior Google executive, Sheryl Sandberg, joined the social media giant. Sandberg thus became, as Zuboff puts it, the “Typhoid Mary” who helped disseminate surveillance capitalism.

***

The dynamic interactions between human nature and this toxic business model lie at the heart of what has happened with social media. The key commodity is data derived from close surveillance of everything users do on these companies’ services. The overwhelming priority for the algorithms that curate users’ social media feeds is therefore to maximise “user engagement” – the time users spend on them – and it turns out that misinformation, trolling, lies, hate speech, extremism and other triggers of outrage seem to achieve that goal better than more innocuous stuff. Another engagement maximiser is clickbait – headlines that intrigue but leave out a key piece of information. (“She lied all her life. Guess what happened the one time she told the truth!”) In that sense, social media and many smartphone apps are essentially fuelled by dopamine – the chemical that ferries information between neurons in our brains, and is released when we do things that give us pleasure and satisfaction.
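A caricature makes the incentive visible. The sketch below is emphatically not any platform’s real ranking code – the posts and engagement scores are invented – but it shows why a feed sorted on a single predicted-engagement number will float outrage to the top whenever outrage is what engages.

```python
# Toy feed ranker: when the only objective is predicted engagement,
# whatever correlates with engagement -- including outrage -- rises.
# All posts and scores below are invented for illustration.

posts = [
    {"text": "Local library extends opening hours", "predicted_engagement": 0.11},
    {"text": "You won't BELIEVE what she did next",  "predicted_engagement": 0.58},
    {"text": "THEY are lying to you about this",     "predicted_engagement": 0.74},
]

# Sort purely by the engagement prediction -- the one metric the business
# model rewards. Accuracy and civility never enter the sort key.
feed = sorted(posts, key=lambda p: p["predicted_engagement"], reverse=True)

for post in feed:
    print(f'{post["predicted_engagement"]:.2f}  {post["text"]}')
```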

The bottom line is this: while social media users are essential to surveillance capitalism, they are not its paying customers – that role is reserved for advertisers. The relationship of platform to user is therefore essentially manipulative: the user has to be encouraged to produce as much behavioural surplus as possible.

A key indicator of this asymmetry is the End User Licence Agreement (EULA) that users are required to accept before they can access a service. Most of these “contracts” consist of three coats of prime legal verbiage that no normal human being can understand, and so nobody reads them. To illustrate the point, in June 2014 the security firm F-Secure set up a free WiFi hotspot in the centre of London’s financial district. Buried in the EULA for this “free” service was a “Herod clause”: in exchange for the WiFi, “the recipient agreed to assign their first born child to us for the duration of eternity”. Six people accepted the terms. In another experiment, a software firm buried an offer of a $1,000 reward at the very end of its terms of service, just to see how many people would read that far. Four months and 3,000 downloads later, only one person had claimed the money.

Despite this, our legal systems treat the fact that most internet users click “Accept” as confirmation of informed consent, which it clearly is not: it is really a passive admission of impotence. Such asymmetric contracts would be laughed out of court in real life, but they apparently remain sacrosanct in cyberspace.

***

According to the security guru Bruce Schneier of Harvard, “Surveillance is the business model of the internet.” But it is also a central concern of modern states. When Edward Snowden broke cover in the summer of 2013 with his revelations of the scale and reach of the surveillance activities of the US and some other Western countries, the first question that came to mind was: is this a scandal or a crisis? Scandals happen all the time in democracies; they generate a great deal of heat and controversy, but after a while the media caravan moves on and nothing happens. Crises, on the other hand, do lead to substantive reform.

Snowden revealed that the US and its allies had been engaged in mass surveillance under inadequate democratic oversight. His disclosures provoked apparent soul-searching and anger in many Western democracies, but the degree of public concern varied from country to country. It was high in Germany, perhaps because so many Germans have living memories of Stasi surveillance. In contrast, public opinion in Britain seemed relatively relaxed: opinion surveys at the time suggested that about two-thirds of the British public had confidence in the security services and were thus unruffled by Snowden. Nevertheless, there were three major inquiries into the revelations in the UK and, ultimately, a new act of parliament – the Investigatory Powers Act 2016. This overhauled and in some ways strengthened judicial oversight of surveillance activities by the security services; but it also gave those services significant new powers – for example in “equipment interference” (legal cover to hack into targeted devices such as smartphones, domestic networks and “smart” devices such as thermostats). So, in the end, the impact of the Snowden revelations was that manifestly inadequate oversight provisions were replaced by slightly less inadequate ones. It was a scandal, not a crisis. Western states are still in the surveillance business; and their populations still seem comfortable with this.

There’s currently some concern about facial recognition, a genuinely intrusive surveillance technology. Machine-learning systems have become reasonably good at recognising faces in public places, and many state agencies and private companies are already deploying them. It means that people are being identified and tracked without their knowledge or consent. Protests against facial recognition are well-intentioned but, as Schneier points out, banning it is the wrong way to oppose modern surveillance.


All-seeing eye: CCTV in Westminster, London

This is because facial recognition is just one identification tool among many enabled by digital technology. “People can be identified at a distance by their heartbeat or by their gait, using a laser-based system,” says Schneier. “Cameras are so good that they can read fingerprints and iris patterns from metres away. And even without any of these technologies, we can always be identified because our smartphones broadcast unique numbers called MAC addresses. Other things identify us as well: our phone numbers, our credit card numbers, the licence plates on our cars. China, for example, uses multiple identification technologies to support its surveillance state.”

The important point is that surveillance and our passive acceptance of it lie at the heart of the dystopia we are busily constructing. It doesn’t matter which technology is used to identify people: what matters is that we can be identified, and then correlated and tracked across everything we do. Mass surveillance is increasingly the norm. In countries such as China, a surveillance infrastructure is being built by the government for social control. In Western countries, led by the US, it’s being built by corporations in order to influence our buying behaviour, and is then used incidentally by governments.

What’s happened in the West, largely unnoticed by the citizenry, is a sea change in the social contract between individuals and the state. Whereas once the deal was that we accepted some limitations on our freedom in exchange for security, now the state requires us to surrender most of our privacy in order to protect us. The (implicit and explicit) argument is that if we have nothing to hide then we have nothing to fear. And people seem to accept that ludicrous trope. We have been slouching towards dystopia.

***

The most eerie thing about the last two decades is the quiescence with which people have accepted – and adapted to – revolutionary changes in their information environment and lives. We have seen half-educated tech titans proclaim mottos such as “Move fast and break things” – as Mark Zuckerberg did in the early years of Facebook – and then refuse to acknowledge responsibility when one of the things they may have helped to break is democracy. (This is the same democracy, incidentally, that enforces the laws that protect their intellectual property, helped fund the technology that has enabled their fortunes and gives them immunity for the destructive nonsense that is disseminated by their platforms.) And we allow them to get away with it.

What can explain such indolent passivity? One obvious reason is that we really (and understandably) value some of the services that the tech industry has provided. There have been various attempts to attach a monetary value to them, but any conversation with a family that’s spread over different countries or continents is enough to convince one that being able to Skype or FaceTime a faraway loved one is a real boon. Or just think of the way that Google has become a memory prosthesis for humanity – or how educational non-profit organisations such as the Khan Academy can disseminate learning for free online. 

We would really miss these services if they were one day to disappear, and this may be one reason why many politicians tip-toe round tech companies’ monopoly power. That the services are free at the point of use has undermined anti-trust thinking for decades: how do you prosecute a monopoly that is not price-gouging its users? (The answer, in the case of social media, is that users are not customers; the monopoly may well be extorting its actual customers – advertisers – but nobody seems to have inquired too deeply into that until recently.)

Another possible explanation is what one might call imaginative failure – most people simply cannot imagine the nature of the surveillance society that we are constructing, or the implications it might have for them and their grandchildren. There are only two cures for this failure: one is an existential crisis that brings home to people the catastrophic damage that technology could wreak. Imagine, for example, a more deadly strain of the coronavirus that rapidly causes a pandemic – but governments struggle to control it because official edicts are drowned out by malicious disinformation on social media. Would that make people think again about the legal immunity that social media companies enjoy from prosecution for content that they host on their servers? 

The other antidote to imaginative failure is artistic creativity. It’s no accident that two of the most influential books of the last century were novels – Orwell’s Nineteen Eighty-Four (1949) and Aldous Huxley’s Brave New World (1932). The first imagined a world in which humans were controlled by fear engendered by comprehensive surveillance; the second portrayed one in which citizens were undone by addiction to pleasure – the dopamine strategy, if you like. The irony of digital technology is that it has given us both of these nightmares at once.

Whatever the explanation, everywhere at the moment one notices a feeling of impotence – a kind of learned helplessness. This is seen most vividly in the way people shrug their shoulders and click “Accept” on grotesquely skewed and manipulative EULAs. They face a binary choice: accept the terms or go away. Hence what has become known as the “privacy paradox” – whenever researchers and opinion pollsters ask internet users if they value their privacy, they invariably respond with a resounding “yes”. And yet they continue to use the services that undermine that beloved privacy.

It hasn’t helped that internet users have watched their governments do nothing about tech power for two decades. Surveillance capitalism was enabled because its practitioners operated in a lawless environment. It appropriated people’s data as a free resource and asserted its right to do so, much as previous variations of capitalism appropriated natural resources without legal restrictions. And now the industry claims as one of its prime proprietary assets the huge troves of that appropriated data that it possesses. 

It is also relevant that tech companies have been free to acquire start-ups that threatened to become competitors without much, if any, scrutiny from competition authorities. In any rational universe, Google would not be permitted to own YouTube, and Facebook would have to divest itself of WhatsApp and Instagram. It’s even possible – as the French journalist Frédéric Filloux has recently argued – that Facebook believes its corporate interests are best served by the re-election of Donald Trump, which is why it’s not going to fact-check any political ads. As far as I can see, this state of affairs has not aroused even a squawk in the US.

When Benjamin Franklin emerged on the final day of deliberation from the Constitutional Convention of 1787, a woman asked him, “Well Doctor, what have we got, a republic or a monarchy?” To which Franklin replied, “A republic… if you can keep it.” The equivalent reply for our tech-dominated society would be: we have a democracy, if we can keep it. 


This article appears in the 26 Feb 2020 issue of the New Statesman, The death of privacy