Update: Nope, that isn't the reason - or at least, as far as we can tell it isn't. We checked response rates for panellists before and after the first debate to see if there was a surge in response rates for people with Lib Dem party identification after the beginning of "Cleggmania". It did indeed increase... but so did response rates for people with Conservative ID, Labour ID and no party ID, all to just the same extent. Of course an increase in response rates of people with LD party ID wouldn't necessarily have increased Lib Dem support in the polls anyway, as we weight by that, but it was the best measure we had, and hopefully it would have been indicative of a wider increase in response rates from LD supporters. I think if there had been a disproportionate increase in response rates from Lib Dem supporters it would have been reflected in a disproportionate increase in response rates from Lib Dem identifiers too. The possibility remains that it produced a surge in response rates for people who would vote Lib Dem but did NOT normally identify with them, while response rates for Lib Dem identifiers remained in line with national trends... but that seems somewhat forced.
The pollsters got our 2010 general election wrong, over-estimating the levels of support for the Liberal Democrats in their final polls. Only the exit poll got it right. The level of error varied between pollsters; the consistency of the error, however, means this wasn’t just one pollster having a bad day with a duff sample or similar.
Quite what the reason was for this is a mystery. Or, more accurately, none of the explanations offered so far stand up to close examination of the evidence. For example, it sounds plausible that the Liberal Democrat support was disproportionately dependent on younger people saying “Lib Dem” to pollsters and many of them ending up not voting. Plausible – until you look at the evidence, at which point it fails as an explanation. Not only were the pollsters already adjusting for likely turnout levels; there is also no evidence to suggest a large-scale disproportionate turnout effect beyond that.
However, the recent debate over variations in American Presidential polls throws up a possible different explanation. Here’s some of the US discussion, from the expert Nate Silver:
Even the best surveys these days only manage to get about 10 percent of people on the phone, while the shoddy ones might struggle to get 3 or 5 percent of voters to return their calls. These percentages have fallen precipitously over the past two decades.
Polling firms are hoping that the 10 percent of people that they do reach are representative of the 90 percent that they don’t, but who will nevertheless vote. But there are no guarantees of this, and it is really something of a leap of faith. The willingness to respond to surveys may depend in part on the enthusiasm that voters have about the election on any given day.
In other words, if a candidate is seen as doing well, their supporters may become disproportionately likely to respond to opinion polls, thereby exaggerating their level of support. (This is different from how likely they are to turn out to vote, and so adjustments for turnout won’t correct for it.) Similarly, a candidate dropping in support may see his or her supporters become less keen to respond to surveys, again exaggerating the actual shift. As an explanation of the big swings in Gallup’s Presidential surveys, this idea seems to have some mileage.
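To see how little differential willingness it takes, here is a minimal back-of-the-envelope sketch. All the numbers are invented for illustration, not real 2010 figures: suppose Lib Dem supporters, fired up, respond at 6.5% while everyone else responds at 5%.

```python
# Hypothetical illustration with invented numbers (not real 2010 data):
# Silver's differential-response mechanism in miniature.
true_shares = {"Con": 0.37, "Lab": 0.30, "LD": 0.24, "Other": 0.09}

# Assumed response rates: LD supporters answer the pollster at 6.5%
# instead of the 5% everyone else manages.
response_rate = {"Con": 0.05, "Lab": 0.05, "LD": 0.065, "Other": 0.05}

# Expected composition of the achieved sample: each group's weight is
# its true share multiplied by its willingness to respond.
responders = {p: true_shares[p] * response_rate[p] for p in true_shares}
total = sum(responders.values())
polled = {p: responders[p] / total for p in true_shares}

for p in true_shares:
    print(f"{p}: true {true_shares[p]:.0%}, polled {polled[p]:.0%}")
# The Lib Dems come out at roughly 29% against a true 24% - a five-point
# overstatement from a modest difference in willingness to respond,
# untouched by any turnout adjustment.
```

A 1.3x difference in response willingness is enough to move a party's measured share by around five points, which is the order of the 2010 Lib Dem error.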
The logic could also apply to the UK. Perhaps the reason the final pre-polling day polls exaggerated Lib Dem support is that Lib Dem supporters were disproportionately willing to respond to surveys, being fired up by the chance of a dramatic result. The accurate exit poll, by only polling people who had voted, would not have had this problem.
Sounds plausible. Now all we need is some checking of the evidence…