There has been a lot of excitement this week about an unadjusted opinion poll by Ipsos MORI which shows Labour ahead of the Conservatives by 43 points to 40.[1] Given that the Conservatives have been leading the polls since the election was called, with anything up to a 20+ point margin, this has understandably raised the hopes of those wishing for Comrade Corbyn to become Prime Minister on 9th June.
The important adjective is in the title though: unadjusted.
The fundamental problem that plagues opinion polls is that they ask people what they are going to do in the future, which by its very nature is uncertain. In terms of elections, there are two key questions:
- Who will you vote for (voting intention)?
- How likely are you to vote (turnout)?
For voting intention there are well-known problems, from people not wanting to admit to voting Conservative (the ‘Shy Tory Effect’, which was blamed for the 1992 general election polls failing to predict a win for John Major) to people changing their minds at the last minute. For example, in the Scottish independence referendum there were some concerns amongst unionists when a poll showed a lead of seven percentage points in favour of independence, but these fears were allayed to an extent by the likelihood of some people reverting to the status quo on polling day.[2]
In terms of turnout, again there are issues with the data. For example, do you really want to admit that you have no intention of voting, which to some people might be considered as embarrassing as voting Conservative? This is a particular problem with younger voters, who may respond to a survey saying they’re fired up by Jeremy Corbyn’s socialist utopia, but then decide on polling day that actually all politicians only care about old people and there’s no point in voting for any of them. This is not just a sweeping statement – historically turnout amongst young people has been far lower than the numbers predicted by raw polling data.
To work around these problems, polling organisations adjust the raw results based on historical data (e.g. stated vs actual turnout) and try to take account of factors such as the Shy Tory Effect. This is both an art and a science, and because there’s no ‘right’ way to adjust polls you will end up with different numbers from different organisations.
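To make the turnout adjustment concrete, here is a minimal sketch in Python. All the numbers are invented for illustration (they are not real Ipsos MORI data or methodology): each respondent’s stated intention is simply down-weighted by a hypothetical ratio of historical actual turnout to stated turnout for their age group.

```python
# Hypothetical raw poll responses: (party, age group) per respondent.
raw_responses = [
    ("Labour", "18-24"), ("Labour", "18-24"), ("Labour", "25-49"),
    ("Conservative", "65+"), ("Conservative", "65+"), ("Conservative", "25-49"),
    ("Labour", "65+"), ("Conservative", "50-64"), ("Labour", "50-64"),
    ("Conservative", "25-49"),
]

# Illustrative weights: historical actual turnout divided by stated
# turnout, by age group. Invented numbers, chosen only to show the
# direction of the effect (young respondents over-state turnout most).
turnout_weight = {"18-24": 0.55, "25-49": 0.75, "50-64": 0.85, "65+": 0.95}

def adjusted_shares(responses, weights):
    """Weight each response by its group's turnout factor,
    then normalise the weighted totals to percentage shares."""
    totals = {}
    for party, group in responses:
        totals[party] = totals.get(party, 0.0) + weights[group]
    grand_total = sum(totals.values())
    return {party: 100 * t / grand_total for party, t in totals.items()}

print(adjusted_shares(raw_responses, turnout_weight))
```

In the raw data the two parties are tied at five respondents each, but after weighting, the Conservative share comes out ahead, because its supporters sit in the age groups that historically turn out closer to their stated intention. Real adjustments layer on much more (demographic weighting, past-vote recall, and so on), which is why different organisations get different numbers from similar raw data.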
Of course, as every financial advertisement is obliged to warn you, past results do not guarantee future returns, but generally speaking an adjusted poll is likely to be more accurate than an unadjusted one.
The takeaway message: If you see a polling result which appears to be an outlier, there are three possibilities:
- The poll is unadjusted. You shouldn’t read much into it.
- The polling organisation has used different assumptions. Unless you know what those assumptions are – and sometimes even if you do – it’s hard to judge whether they are sensible.
- The mood of the electorate has changed significantly. Whilst always a possibility, it’s safer to wait for this to be confirmed by another organisation rather than getting your hopes up (or down).
Or as Barbie would say: maths is hard. Let’s go shopping.
[2] The final count had ‘no’ to independence winning by around ten percentage points.