
What’s so special about political opinion polling?

Earlier this week I went along to a session organised by the Economic & Social Research Council into how political polling performed at the 2010 general election. The openness with which different political pollsters discussed their approaches, the problems they encounter and how the different techniques used by competing firms compare was a credit to the British polling industry.

But it also left me with two nagging doubts.

One was widely shared by those at the event. No-one so far has really come up with an explanation as to why the polls just before polling day over-estimated the Liberal Democrat share of the vote so significantly. Explanations such as late swing or differential turnout are not supported by the evidence. We may yet get an explanation as more data from the British Elections Study emerges, but for the moment the industry’s answers are variations on ‘we don’t know’.

The other nagging doubt is about non-political polling in the UK. Political polling, at least just before a general election, has a public yardstick to be judged against: the actual result. Because of that yardstick, we know that whether it is face-to-face, phone or internet polling, the samples you get are highly skewed. We also know that adjusting for a full range of demographic and lifestyle variables (e.g. age, gender, number of foreign holidays and newspaper readership) still leaves you with a sample that doesn’t reflect the electorate overall. It is only when you add in some extra adjustments that you start to get polls which reflect the actual results.

The secret sauce added to the figures at this final stage varies greatly between the different pollsters, and the merits of the different approaches are much debated. Even MORI, traditionally the pollster most disdainful of magic sauce, adds in some further adjustments, e.g. for whether or not someone works in the public sector and for how likely someone says they are to vote. (Our own predictions of how likely we are to vote are pretty poor, but, with the public yardstick to judge against, we can see that this adjustment still makes the results more accurate.)
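To make the basic weighting step concrete, here is a minimal sketch of post-stratification weighting, the simplest form of the demographic adjustments described above. The variable, categories and figures are made up for illustration; real pollsters weight on many variables at once and, as discussed, add further adjustments on top.

```python
# Illustrative sketch only: give each respondent a weight so the
# weighted sample matches known population shares for one variable.
from collections import Counter

def post_stratify(sample, population_shares):
    """sample: list of category labels, one per respondent.
    population_shares: category -> target proportion in the population.
    Returns one weight per respondent."""
    n = len(sample)
    counts = Counter(sample)
    # weight = target share / observed share for that respondent's category
    return [population_shares[cat] / (counts[cat] / n) for cat in sample]

# Hypothetical example: the raw sample is 70% graduates,
# but graduates are only 30% of the population.
sample = ["graduate"] * 7 + ["non-graduate"] * 3
weights = post_stratify(sample, {"graduate": 0.3, "non-graduate": 0.7})

# After weighting, the graduate share of the sample matches the target.
weighted_grad_share = (
    sum(w for w, cat in zip(weights, sample) if cat == "graduate") / sum(weights)
)
```

The point of the sketch is that this kind of correction only works for variables where you know the true population distribution; the post’s argument is about the skews that remain after it.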

But what about non-political polling? We don’t have similar public yardsticks to a general election result to judge against and equivalents to those extra adjustments are usually not made for non-political polling. But if we know that without those extra adjustments the sample isn’t properly representative of the political views of the public, why should we think it is properly representative of the public in other ways, such as the public’s liking of particular brands, attitudes towards products or views of different corporations?

Or in other words, given that we know political polling goes wrong without extra adjustments beyond the basic demographic and lifestyle ones, why should we trust polling on other topics to be right without any extra adjustments?

2 responses to “What’s so special about political opinion polling?”

  1. I’d wondered that too and in fact asked the question at the event. But from other data presented (e.g. call-backs after the election to people who had said they would vote Lib Dem, to see what they said they had done), that doesn’t look to be the case. There wasn’t a clump of people who had polled as Lib Dem saying afterwards that they didn’t vote because it turned out they weren’t on the register.
