13 June 2019, updated 8 July 2021, 9:39am

The tech industry’s disregard for privacy relies on the consent of its customers

By Dorian Lynskey

Recently, I was out in east London with some friends, looking for somewhere to drink before dinner. We made tracks for a nearby bar that we remembered fondly but hesitated when we saw that there were now facial-recognition scanners at the door. Half of us had seen them in bars before and thought they were no big deal. The other half (myself included) regarded this as some dystopian bullshit. We went somewhere else.

I’ve always considered myself too lazy to be paranoid and I’ve probably been as cavalier as the next person about surrendering my personal data to tech companies with little thought as to how it might be used. Working on a book about Nineteen Eighty-Four and its privacy-obsessed author for the last two years, however, seems to have made me leery. It strikes me as outrageous to have your face scanned just to get a drink. I declined an upgrade to an iPhone X because of the facial recognition software. I dislike cashless coffee shops. I’ve resisted getting a virtual assistant because I’d rather order my own milk and choose my own playlists than pay a corporation to surveil me. Am I becoming paranoid or just sensible? Technology evolves so fast that innovations that would have seemed implausibly intrusive five years ago are now so commonplace that it can seem alarmist to reject them, but they are not mandated by law. We have choices and it feels good to make them.

The tech industry gives the impression that it is run by people who have never read (or at least never understood) dystopian fiction. Remember last year when Google’s Sundar Pichai used an AI to fool a restaurant booker into thinking she was speaking to a human being, and the audience laughed and clapped as if it were a cool new toy rather than a post-truth nightmare waiting to happen? The truth is worse. The denizens of Silicon Valley are well aware of the dangers of the products they design. Tech CEOs go on digital detox retreats and withhold devices from their children for as long as possible, and it’s no coincidence that the first city in the US to outlaw the use of facial recognition by the police and municipal agencies is San Francisco. Workers in the tech industry know that their products can make mistakes and can be vulnerable to hackers. They know where the data goes. And they don’t want to get high on their own supply.

Last December I attended an HG Wells lecture by Dave Eggers, author of the bestselling Silicon Valley dystopia The Circle, and left feeling that I was nowhere near paranoid enough. Inspired by Wells’s pioneering manifestos for human rights in the 1940s, Eggers called for a Declaration of Digital Rights to restore some balance to the digital world but said it would never come about unless people demanded one. “We are entirely the problem,” he said affably. “Nothing is being done to us that compares to what we have done to ourselves. We have walked straight into every horrible consequence of our current technological dystopia, and with open eyes. All the warnings have been out there for a long time, but regardless, a thousand times over, we have opted in.”

That is the bottom line: we cannot plead ignorance. Operating a variety of doublethink, we claim that we are concerned about privacy yet act as if we couldn’t care less. We read The Circle and watch Black Mirror and valorise Edward Snowden but do little to moderate our behaviour. As we have seen recently, many people struggle with basic safeguards like deleting troublesome old tweets; most of what is called “offence archaeology” requires no more digging ability than that of a child with a bucket and spade. If people are this careless on a public platform even when they have a great deal to lose, what chance is there of them thinking through the implications of online activity that feels private and trivial? We are also complicit on the level of product development. Facial recognition software has been finessed thanks to millions of Facebook users tagging their friends in photographs. Surely we have to draw the line somewhere.

For Btihaj Ajana, senior lecturer in the department of digital humanities at King’s College London and author of Governing Through Biometrics: The Biopolitics of Identity, it was a Fitbit. Six years ago, she bought the device to encourage her to exercise but she soon began asking questions about where her data went (the cloud) and who owned it (Fitbit, Inc.). After three months she consigned it to a drawer “as a form of resistance”, and it’s been there ever since. Recently, her local gym replaced membership cards with facial-recognition scanners at the turnstiles and she refused to use them.

“The internet and mobile devices aren’t just consumer spaces, they’re also political spaces,” Ajana tells me. “Treating them like that makes us more aware of what can be done with our data.” The consequences of constant surveillance, she says, can be profound: “If you are aware that you are being monitored all the time then what you think and how you behave gets affected. You censor yourself. You don’t allow yourself to think in a certain way or to say certain things because you feel that the repercussions would be severe. So it threatens our very ability to be a human being with free will.” Politically, we may be a long way from China’s surveillance state and imminent “social credit system”, but the technology is there and I find it increasingly hard to regard it as benign.

The tech industry’s disregard for privacy is not an inevitable consequence of the technology but the product of a particular economic model combined with a glib utopianism, and it relies on the consent of its customers. If everybody avoided bars, gyms and airlines with facial recognition scanners, they would be forced to remove them. Liberty’s Martha Spurrier has said that San Francisco’s ban proves “that we can have the moral imagination to say, sure, we can do that, but we don’t want it.” That moral imagination — the simple willingness to say no — is in short supply. Ajana points to GDPR legislation: it was introduced to hand power back to internet users but still people are inclined to see the warning page as an irritant and quickly click “Accept All” to make it disappear.

It is hard to avoid police-operated facial-recognition scanners, which are being challenged in court for the first time by Liberty. It is effectively impossible to steer clear of CCTV cameras, of which the UK has the greatest density in the world. Not everybody is inclined to bin their smartphones or plunge into the underworld of the dark web. “We don’t have many choices in between,” says Ajana. “Either you completely subject yourself to data sharing or you have to give up on using those devices altogether.” But there are still simple measures that we can take to protect our privacy, provided that we care enough.

In 2013, Ajana collaborated with the Headlong theatre company on an app to accompany Robert Icke and Duncan Macmillan’s stage version of Nineteen Eighty-Four. Ticket-buyers were invited (but not obliged) to sign up to the Digital Double app, whose terms and conditions clearly stated that it would harvest images from their social media accounts and project them in the foyer on the night of the show. Of course, the point of the exercise was that most people don’t read the terms and conditions. Ajana expected that, even though it was perfectly legal, people would be unnerved, and even outraged, by the broadcasting of their data. Once the play’s run began, however, she was surprised to find that they didn’t mind at all. When they saw the photographs they had given away without realising it, they were delighted. They had accepted all.
