The pollsters got our 2010 general election wrong, over-estimating the levels of support for the Liberal Democrats in their final polls. Only the exit poll got it right. The level of error varied between pollsters; the consistency of the error, however, means this wasn’t just one pollster having a bad day with a duff sample or similar.
Quite what the reason was for this is a mystery. Or, more accurately, none of the explanations offered so far stand up to close examination of the evidence.
For example, it sounds plausible that the Liberal Democrat support was disproportionately dependent on younger people saying “Lib Dem” to pollsters and many of them ending up not voting. Plausible – until you look at the evidence, at which point it fails as an explanation. Not only were the pollsters already adjusting for likely turnout levels; there is also no evidence of a large-scale disproportionate turnout effect beyond that.
However, the recent debates over variations in American Presidential polls throw up a possible different explanation.
Here’s some of the US discussion, from the polling analyst Nate Silver:
Even the best surveys these days only manage to get about 10 percent of people on the phone, while the shoddy ones might struggle to get 3 or 5 percent of voters to return their calls. These percentages have fallen precipitously over the past two decades.
Polling firms are hoping that the 10 percent of people that they do reach are representative of the 90 percent that they don’t, but who will nevertheless vote. But there are no guarantees of this, and it is really something of a leap of faith. The willingness to respond to surveys may depend in part on the enthusiasm that voters have about the election on any given day.
In other words, if a candidate is seen as doing well, then their supporters may become disproportionately likely to respond to opinion polls, thereby exaggerating their level of support. (This is different from how likely they are to turn out to vote, and so adjustments for turnout won’t correct for it.) Similarly, a candidate dropping in support may see his or her supporters become less keen to respond to a survey, again exaggerating the actual shift. As an explanation of the big swings in Gallup’s Presidential surveys, this idea seems to have some mileage.
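The size of this effect is easy to underestimate, so a quick back-of-the-envelope calculation may help. The figures below are purely illustrative assumptions, not real response-rate data: suppose a party’s true support is 25%, but its fired-up supporters answer pollsters at 8% while everyone else answers at 5%.

```python
# Illustrative sketch of differential non-response bias.
# All numbers here are assumed for the example, not real data.

true_share = 0.25            # assumed true support for the party
p_respond_supporter = 0.08   # assumed response rate among its supporters
p_respond_other = 0.05       # assumed response rate among everyone else

# Share of poll respondents who back the party: supporters who respond,
# divided by all respondents.
polled_share = (true_share * p_respond_supporter) / (
    true_share * p_respond_supporter
    + (1 - true_share) * p_respond_other
)

print(f"True support:   {true_share:.1%}")    # 25.0%
print(f"Polled support: {polled_share:.1%}")  # 34.8%
```

On these assumed numbers, a modest gap in willingness to respond (8% versus 5%) inflates the party’s polled support by nearly ten points, with no lying by respondents and no turnout effect at all, which is why turnout weighting cannot fix it.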
The logic could also apply to the UK. Perhaps the reason the final pre-polling day polls exaggerated Lib Dem support is that Lib Dem supporters were disproportionately willing to respond to surveys, being fired up by the chance of a dramatic result. The accurate exit poll, by only polling people who had voted, would not have had this problem.
Sounds plausible. Now, all we need is some checking of the evidence…