21 November 2021

The political risks of Big Data dominance

Big Data's hubristic claim that it understands humanity opens the door to dangerous manipulation.

By Firmin DeBrabander

Big Data has big things in store for us. The burgeoning industry devoted to collecting and analysing our every digital emission, no matter how minute or mundane, believes it has discovered the key to reading us, and predicting, if not prompting, our behaviour.

Such ambitions are not new. Political leaders and researchers throughout human history have thought they cracked the human code and could program us at will. So, what is different now? Why should we believe Big Data has figured us out? And even if the data analysts are wrong, what should we make of their hopes and designs? And what should we fear?

Data analysis is an esoteric science whose methods and conclusions are inscrutable to us. To cite a famous example, data analysts working for the US retailer Target deduced that particular female customers were pregnant by analysing their purchases of specific goods, including vitamins, lotions and cotton balls. Target’s analysts were so astute that they could predict a woman’s due date to within a week.

Facebook’s data analysts, meanwhile, know when we are falling in love or breaking up. Through careful study, they determined that “couples about to be ‘official’ will post…1.67 times per day in the 12 days before they publicly change their profile to ‘in a relationship’. The number of posts then falls to 1.53 posts per day in the next 85 days… [While] the number of interactions drops as the relationship starts, there’s also an uptick in the level of positivity. This includes the use of the words like love, nice, happy, and…[subtracting] negative words like hate, hurt and bad.”

In another alarming example, surveillance scholar Shoshana Zuboff explains in The Age of Surveillance Capitalism how online lenders deploy data analysis to determine creditworthiness. Through “detailed mining of an individual’s smartphone and other online behaviours”, they extract salient data, which includes “the frequency with which you charge your phone battery, the number of incoming messages you receive, if and when you return phone calls, how many contacts you have listed in your phone, how you fill out online forms, or how many miles you travel each day”. How do they make sense of this data? It’s hard to say.

It is clear, however, that data analysts aim to uncover our vulnerabilities. Why else would Facebook want to know if we are falling in love or breaking up? We are especially irrational and pliable in those states, and advertisers – Facebook’s real clients – would love to know it. Armed with our intimate information, analytically savvy advertisers may influence our behaviour and turn us into the customers they have always wanted us to be.

Zuboff suggests that Big Data is enamoured with the thinking of 20th-century behavioural psychologist BF Skinner. Skinner harboured controversial views, such as the notion that knowledge and freedom are contrary to one another: our actions only seem free so long as their causes and motivations are not understood; when we are fully understood, we will see that our actions are perfectly predictable, and our freedom illusory. In fact, Skinner believed that the notion of an “autonomous man” obstructs our rational future and stifles our progress. The rational future is technocracy, where choice in key matters is taken out of the hands of errant individuals and vested in experts who know us, read us, and understand what we truly need.

Skinner’s convictions and aspirations are reminiscent of a distinctive strain of rationalism that the conservative philosopher Michael Oakeshott detected in 20th-century political thinking. This rationalism, Oakeshott explains, combines a “politics of perfection” and a “politics of uniformity”. Specifically, rationalists believe that political problems can be solved by ensuring that political institutions correspond with an ideal form of government. And instead of drawing on history and experience to deal with political conflicts, rationalists rely on their technical understanding of human nature and society.

According to the rationalist approach, humans ultimately need to be purged of the habits that hold them back and then reprogrammed to achieve the model political community. Oakeshott’s account of rationalism captures the mindset behind Stalin’s industrial policies and Mao’s Cultural Revolution. And tellingly, the USSR and the Chinese Communist Party each sought to erase tradition and radically renew society by forcing their citizens and institutions to conform with a political ideal.

History, however, has shown that the search for and supposed implementation of human perfection and uniformity is a recipe for bloodshed. As the eminent intellectual historian Isaiah Berlin famously put it, humanity is made of “crooked timber”. We diverge in countless ways, some remarkable, some minute, and, as Berlin warned, “forc[ing] people into the neat uniforms demanded by dogmatically believed-in schemes is almost always the road to inhumanity”.

What’s more, presuming to understand humanity is itself a violent act. It is a kind of conquest that indicates the hubris and danger of political rationalists. To cite the late Donald Rumsfeld, defence secretary under George W Bush, there are “unknown unknowns” in the human psyche. And when leaders claim to grasp the human condition, freedom and diversity are easily sacrificed for the sake of a greater vision. In fact, it is this kind of sacrifice that Stalin and Mao demanded – packaged under the seemingly benign, bureaucratic title of “central planning” – and their technocratic social experiments caused untold suffering.

What does this tell us about Big Data? What does its dubious intellectual lineage portend? With the help of artificial intelligence (AI), data analysts lay claim to ever more of your soul. Researchers have deployed AI to diagnose mental health conditions, for example, by listening to the human voice and analysing its tone, pitch and volume. This is ingenious, and very helpful for people who don’t have the option of visiting a therapist.

But if you are as predictable as the data analysts claim, and if data analysts are vested with great power, they may be tempted to use this technology for less admirable ends. In fact, one company now offers AI mental health technology to telemarketers, ostensibly so that they can better empathise with customers, but it could also be used to help lure them in. This would be a devious use of a technology designed to detect when people are at their most vulnerable.

Data analysts enamoured of their own talents expand the bounds of experimentation. Facebook, which knows when we are falling in love, has developed techniques for influencing our moods by exposing us to select posts and ads. It has also deployed its algorithms to boost voter turnout – a feat that was subsequently emulated by Cambridge Analytica in 2016.

Fundamentally, data analysis is focused on detecting our every need and want – before we’re even aware of them ourselves. Analysts have become very good at this, and thus have enabled advertisers to serve us better. But there is a danger in the science of Big Data.

Like political rationalists, data analysts may think they know best. When we are reduced to a set of data points that can be pushed and prodded by the expert analyst, we run the risk of being objectified and having our autonomy undermined. In this way, Big Data opens the door to gross inhumanity.

Yet despite this worrisome similarity, there is a key difference between rationalism in politics and Big Data. Unlike the technocrats who served under Stalin and Mao, data analysts do not have a monopoly on political power. This means we have an opportunity to enact government legislation to limit the power of Big Data and mitigate its potential for abuse.

We can, for example, restrict the uses to which analysts apply their insights. Sensitive information, such as whether we suffer from anxiety or depression, should be revealed to medical professionals only. Privacy regulations which inform consumers about the data sought and how it will be used may be helpful. Tech firms that meddle in elections should be severely punished. And antitrust legislation targeting the tech industry could dismantle Big Data companies, and whittle them down to a more manageable size.

One thing, however, is clear: we cannot count on Big Data to admit its own fallibility and hold its ambitions in check. 

Firmin DeBrabander is Professor of Philosophy at the Maryland Institute College of Art. He is the author of “Life After Privacy” (Cambridge University Press).

This article is part of the Agora series, a collaboration between the New Statesman and Aaron James Wendland. Wendland is Vision Fellow in Public Philosophy at King’s College, London and a Senior Research Fellow at Massey College, Toronto. He tweets @aj_wendland.
