After every election comes a period of mournful soul-searching for those who lost. Unusually, however, this time the group of losers includes not just defeated parties and their crestfallen leaders, but also a class of people who normally remain untouched by the vagaries of political fortune: opinion pollsters. For better or (as it turned out) for worse, this has arguably been the first ‘opinion poll election’. More polls were conducted than ever before, in more granular detail than ever before, and more attention than ever before was paid to how each shift and shimmy might translate into parties’ Westminster seat shares. Coalition flirtations, threats, and accusations dominated the ‘short campaign’—driven almost exclusively by the apparent ‘truth’ revealed by the polls, that a hung parliament was inevitable. As the public opinion scholar Lindsay Rogers might have put it, vox pollsteri seemed to have become synonymous with vox populi, and (equally inexorably) vox dei.
And yet, just as in 2010, as what initially looked like a wildly implausible exit poll proved agonisingly (for the left) and spectacularly (for the right) accurate over the course of the night, weeks’ worth of simulations and poll aggregations were seemingly discredited in the blink of an eye. The British public delivered a result that wholly confounded (nearly) all predictions—and the polls went from deified to vilified in the space of less than 24 hours, while John Curtice and his team of exit pollsters reigned supreme. Worse, that most sacred of things, the pollsters’ professional neutrality, was called seriously into question, and the British Polling Council took the remarkable step of setting up an independent enquiry into structural bias towards Labour and against the Conservatives within the industry. By anyone’s lights, a truly dramatic reversal—and one that raises the very real threat of a Pollsterdämmerung, a ‘twilight of the pollsters’.
In the corridors of power, the knives are out—as they already were well before the election—with indignant elites arming for a showdown with the polling industry. As in 1992, there have been calls to ban polls in the weeks before polling day, with the added sting of corruption accusations thrown in for good measure. It goes without saying that restricting opinion polling in this way is an overreaction, and an extremely dangerous route to take. Polls are a force for accountability in a democracy, as a regular reminder of the immense power that lies in the population, and of the political weight which its endorsement or rejection of an issue, party, or candidate carries. Without them, the ways to keep government informed of public opinion are reduced to rallies and protests, the media, or occasional fortuitous moments of personal contact. So rather than seeking to remove the influence of polls from our electoral cycle—which only benefits the powerful—the priority (as in 1992) should be to fix their problems. And the good news is that there are plenty of options for doing so.
One of these is what opinion researchers call ‘priming’, where pollsters ask the people they survey to think about specific important factors before asking them their voting intention. It is an inconvenient truth among both academics and professional pollsters that while polls can try to predict voting behaviour, they can never hope to explain it. This means that when polling companies contact their samples, their questions are not supposed to find out why people vote the way they do, but instead to replicate how the average person might think if they were in the voting booth at that very moment. The effectiveness of ‘priming’ may have been shown by Labour’s private polling in the run-up to the election, which asked respondents to “think about the country, the economy, their top issues, the parties and the leaders” before saying how they would vote, and produced Conservative and Labour relative vote shares that were far closer to the eventual outcome. Future opinion polls must make much greater use of such approaches. The focus on party leaders is especially important, and not just because of the rise of TV debates. Party elites personify what parties stand for, ideologically and strategically—just compare the mental pictures conjured up by ‘Tony Blair’s Labour’ versus ‘Ed Miliband’s Labour’—which factors into voters’ decisions even when they are primarily choosing between candidates to be their constituency MP.
But a far more sweeping reform concerns how pollsters measure the level of support parties enjoy within the population. Nothing can replace the situation of actually being in a voting booth, so even the carefully framed forced choice of ‘voting intentions’ may really reflect much vaguer feelings of sympathy or allegiance towards the parties. What is needed is a way to determine how ‘intense’ support for different parties is—going beyond questions of ‘likelihood to vote’ (code for notoriously asymmetric turnout between party supporters) and the controversial exclusion or reallocation of ‘don’t knows’. Rather than simply asking respondents whether they support the prompted parties, pollsters should include ‘attitude scales’—either ranging from ‘weak’ to ‘strong’, or expressed as a numerical scale—to allow people to qualify their stated support: ‘weak Liberal Democrat’, ‘strong UKIP’, a score out of ten for the Greens, and so on. While this would not rule out a recurrence of the famous ‘shy Tory’ factor, it would allow pollsters to identify more accurately the number of ‘switherers’ whose support for their parties might not be entrenched enough to prevent them switching their vote at the last moment.
A further addition would make pollsters’ analysis of voters’ conviction even more sophisticated, covering the ‘degree’ to which they back particular parties. Apart from a few die-hard supporters, most people are open to voting for more than one party, depending on their circumstances. Yet the closest the polling industry has come to modelling this is by adding questions about how respondents voted at the previous election—as with the apparent dependence, well documented by Politicalbetting.com, of projected Labour support at this election on ‘2010 Liberal Democrats’. As well as their partisan attitude scales, voting intention surveys should also include some measurement of ‘additional party preference’: either ordinal, like a poll equivalent of AV—‘1st choice Green, 2nd choice SNP, 3rd choice Labour’—or as a percentage distribution—‘70% Conservative, 30% UKIP’. This would allow pollsters to work out not just how many ‘switherers’ there are, but also where these voters might switch to—information which party strategists, at the very least, would surely be grateful for.
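To make the idea concrete, here is a minimal sketch of how a pollster might combine the two proposed measures—an attitude-strength scale and an AV-style second preference—to count ‘switherers’ and see where they might switch to. All respondent data, field names, and the strength cut-off below are invented for illustration; this is an assumption about how such a survey could be modelled, not any polling company’s actual method.

```python
# Hypothetical model: combine an attitude-strength scale (0-10) with a
# second-choice party to estimate potential vote flows between parties.
# Every record and the threshold are illustrative assumptions.

from collections import Counter

# Each respondent: stated voting intention, strength of that support
# on a 0-10 scale, and an AV-style second-choice party.
respondents = [
    {"intention": "Labour",       "strength": 9, "second": "Green"},
    {"intention": "Labour",       "strength": 4, "second": "Green"},
    {"intention": "Conservative", "strength": 8, "second": "UKIP"},
    {"intention": "Conservative", "strength": 3, "second": "UKIP"},
    {"intention": "Lib Dem",      "strength": 2, "second": "Labour"},
    {"intention": "Lib Dem",      "strength": 5, "second": "Conservative"},
]

SWITHER_THRESHOLD = 6  # assumed cut-off: support below this counts as 'soft'

def potential_flows(sample, threshold=SWITHER_THRESHOLD):
    """Tally soft supporters by (current party, likely destination)."""
    flows = Counter()
    for r in sample:
        if r["strength"] < threshold:
            flows[(r["intention"], r["second"])] += 1
    return flows

for (from_party, to_party), n in potential_flows(respondents).items():
    print(f"{n} soft {from_party} supporter(s) could switch to {to_party}")
```

On this toy sample, four of the six respondents fall below the assumed threshold, and the tally shows not just how many ‘switherers’ each party has but which rivals stand to gain them—exactly the flow information the paragraph above argues strategists would want.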
Ultimately, it remains to be seen quite what recommendations the Polling Council’s enquiry will produce. At the very least, however, the 2015 election has been a warning shot for the polling industry, which now knows that it needs (yet again) to up its game. The position of opinion polls as a staple feature of our democracy is far too precious to be wasted by methodological corner-cutting, or to be lightly abandoned in the face of political hostility. Their role of clarifying to society its own collective views about the pressing affairs of the day is not easy to fulfil. They have doubtless suffered a setback for now—but their situation is far more hopeful than the current narrative about them would have us believe.