17 November 2016

Is polling dead?

The fault is not in our pollsters, but in how the media covers them.

By Will Jennings

Donald Trump is President-Elect of the United States. Britain is leaving the European Union. Ed Miliband is not Prime Minister. The past eighteen months have left the reputations of both pollsters and poll aggregators in tatters. The failure of the polls in Britain led to calls for regulation. There is talk that they should be abandoned altogether in favour of alternative sources of data, or that journalists’ time would be better spent speaking to people in their own communities.

The polling misses in the United Kingdom and the United States are damaging not just because they misled people about the likely outcomes of those races. They also add fuel to the fire in an age in which experts are mocked. Pollsters – and the pundits who use them – are yet another elite to be put back in their place. To populists, the unpredictability of voters is another feather in their cap. Can polling survive its recent troubles?

There should be no doubt that the vast majority of pollsters are highly skilled and diligent in what they do. Every pollster I have ever met cares about getting the election result right. Even setting aside professional honour, there are overriding commercial incentives for polling firms not to go down in flames in the most publicly visible showcase of their product.

Polling is getting harder. Pollsters face declining response rates to their surveys, with respondents increasingly hard to reach. Electorates across many advanced democracies are also undergoing substantial change – voters are becoming more volatile and more detached from the parties they have traditionally supported, while issue- and identity-based politics gives rise to new electoral coalitions. This creates a fast-moving target for pollsters from election to election. Yet there is little empirical evidence that polling errors are actually larger today than they used to be (if anything, data that I have collected on nearly 30,000 polls in 45 countries since the 1940s suggest the reverse).

In the presidential election, the national polls were off by only a couple of points and correctly picked Hillary Clinton as the winner of the popular vote (some political science forecasts also pointed to a close race or a Trump victory). What mattered were the misses in state polling, which had the added effect of making the forecasts of poll aggregators seem far more certain than they should have been.

But errors were not evenly spread across the battleground states. The more white voters without degrees in a state, the more Trump’s share of the vote was under-estimated. This points to turnout models that were badly wrong about who would vote. Because these state-level errors were correlated, they fed into over-confident predictions of the outcome by poll aggregators. That was not the pollsters’ fault, any more than it was their fault that many commentators overlooked the Brexit polls which often showed Leave in the lead.
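
To see why correlated errors matter so much to aggregators, consider a minimal simulation – a sketch with illustrative numbers, not real 2016 polling data. Suppose a candidate leads by two points in each of three must-win states, and state polls carry a typical error of three points. Treating those errors as independent makes a sweep by the trailing candidate look like a freak event; letting the errors share a common national component makes it several times more likely.

```python
# A sketch, not real 2016 data: why treating state polling errors as
# independent makes a poll aggregator's forecast look overconfident.
import math
import random

random.seed(42)

N_SIMS = 100_000
LEADS = [2.0, 2.0, 2.0]   # hypothetical leads (points) in three must-win states
ERROR_SD = 3.0            # assumed standard deviation of state polling error

def sweep_probability(rho: float) -> float:
    """Probability the trailing candidate overturns every lead.

    Each state's error mixes a shared national shock with an independent
    state shock; rho is the correlation between any two states' errors.
    """
    sweeps = 0
    for _ in range(N_SIMS):
        shared = random.gauss(0, ERROR_SD)
        if all(
            lead
            + math.sqrt(rho) * shared
            + math.sqrt(1 - rho) * random.gauss(0, ERROR_SD)
            < 0
            for lead in LEADS
        ):
            sweeps += 1
    return sweeps / N_SIMS

print(f"independent state errors: {sweep_probability(0.0):.1%}")  # roughly 2%
print(f"correlated state errors:  {sweep_probability(0.8):.1%}")  # roughly 13%
```

The aggregate uncertainty, in other words, depends as much on the assumed correlation between states as on any single poll’s error.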

Throwing in the towel is not the solution. At a time when our politics is increasingly fractured and ghettoised, by geography and in filter bubbles online, we need systematic measures of public opinion. Vox pops on street corners or focus groups may provide us with rich insights into the way small numbers of voters think about issues. But how can we know those attitudes extend to the public more generally? Indeed, survey research, including commercial polling, has pointed to many of the trends underlying Brexit and the rise of Trump for some time.

All polls are subject to sampling error and the potential for bias. This makes precise ‘point estimates’ of a vote percentage much harder to get right – and this is where things are most likely to go wrong. Yet news coverage overwhelmingly favours stories about who is up or down in the polls, and demand for this sort of polling is driven by the media and their readers and viewers. Most polls are nonetheless presented without caveat.

Even the widely cited ‘margin of error’ attached to polls is misleading. The margin of error applies only to random sampling error, and none of the opinion polls reported in the media today could be described as pure random probability polls (they are typically stratified samples of some sort). Instead, error is a function of the variety of weighting adjustments made to poll samples, and the distribution of uncertainty around a poll is often not calculated at all. The tendency of commentators to talk about the margin of error means that other sources of error and bias are routinely overlooked. It is not pollsters’ fault that there is such widespread ignorance of the elementary statistical theory behind polling. If polls were digested and reported more carefully, there would not be such surprise when they go wrong. Of course, they would make for less interesting news stories too.
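
To make the distinction concrete, here is a sketch of the arithmetic with made-up figures. The textbook margin of error assumes a pure random probability sample; once weighting adjustments enter, the effective sample size shrinks and the real uncertainty grows (Kish’s design-effect approximation is one rough way to see this).

```python
# Sketch with illustrative figures: the textbook margin of error assumes a
# simple random sample, which no published media poll actually is.
import math

p = 0.48      # hypothetical candidate share
n = 1_000     # sample size
z = 1.96      # 95% confidence

moe = z * math.sqrt(p * (1 - p) / n)
print(f"textbook margin of error: ±{moe:.1%}")        # about ±3.1%

# Weighted, stratified samples carry extra variance. Kish's approximation
# of the design effect from the weights gives a rough correction:
weights = [0.6, 1.4, 0.9, 1.1]                         # toy weights
deff = len(weights) * sum(w * w for w in weights) / sum(weights) ** 2
print(f"design effect: {deff:.2f}")                    # about 1.09
print(f"adjusted margin of error: ±{moe * math.sqrt(deff):.1%}")
```

Even this adjustment captures only sampling variance; the systematic biases that sank the state polls sit outside it entirely, which is why a quoted ±3 point margin understates the real uncertainty.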

Polls still provide us with valuable information, if we can accept that they may sometimes be off by a few percentage points. Polling itself should have encouraged much more scepticism about the outcome of the presidential election: thanks to the polls, everyone knew this was not a normal race. Trump and Clinton had the highest unfavourable ratings of any candidates in history. That alone should have suggested that the normal rules might not apply, or that the behaviour of voters would be less predictable than usual.

Blaming pollsters is the easy way out. We need to become better, more informed and more critical consumers of polls.

Will Jennings is Professor of Political Science and Public Policy at the University of Southampton
