Last updated August 2015.
Paddy Ashdown’s famous election night promise to eat his hat if the 2015 general election exit poll was right was based on a genuine belief, soon shown to be horribly wrong, that the constituency polling and grassroots data gathered by the Liberal Democrats meant the party would do far better than the exit poll’s prediction of a mere 10 seats. As it turned out, the Liberal Democrats did even worse than 10, ending up with just 8 seats.
Why was the party’s prognosis of the situation in the seats so far off? What went wrong with the party’s polling and other sources of intelligence?
The Lib Dem national polling

One problem with the Liberal Democrat polling, only indirectly linked to the Ashdown Hat Problem but a background effect and an important entry in the list of lessons to learn, was the party’s internal national polling. It overwhelmingly concentrated on policies – testing them out to see if people who might vote Liberal Democrat liked them and how important the issue areas were to them. According to that polling, the Lib Dems had plenty of policies which were very popular with the people it needed to vote for the party in its Westminster target seats.
The experience of the election campaign is that this polling was accurate – the popular policies it threw up held up well during the campaign with Lib Dem leaners – but that it was mostly testing the wrong thing.
That is because the Liberal Democrat problem was a valence one, not a policy one. As I explained in Liberal Democrat Newswire #65:
Political scientists crunching the evidence over how people decide who to vote for (such as in Affluence, Austerity and Electoral Change in Britain) find that policy issues matter much less than ‘valence’ issues.
That is, people don’t decide who to vote for by looking at policies and seeing how closely a party or candidate’s policies match up to their own preferences. Rather, they base their decisions on perceived competence on issues where different parties all have the same shared objective. For example, voting Conservative because you think they’ll be best at creating new jobs is a valence choice. All parties want more jobs, so picking the Conservatives is about perceived competence, not ideology.
Although there certainly are ideological choices and they do have an influence, it’s valence that dominates in British elections. Hence the problem for the Liberal Democrats in the general election wasn’t about having controversial policies which people didn’t like. There wasn’t even a small echo of the problems with the immigration amnesty policy of 2010 for example (good policy but burdened with the fatal combination of being both controversial and not amenable to a one-sentence defence). Asked where they put the Lib Dems and themselves on the political spectrum, voters kept on putting the party near to themselves overall.
Rather the problems were valence ones – about competence and trust in particular.
Polling policies and then discussing what to do with the results of the polling gave the impression of a data-led, professional approach to campaigning. The problem was it led off in the wrong direction – debating details of policy selection and presentation rather than debating those valence issues which moved votes.
As a result, for example, the summer 2014 campaign of rolling out lots of individually popular policies went well in terms of media coverage secured – but failed to move the polls.
Having been involved in party polling during previous internally contentious periods, I’d caution against simply criticising the staff involved for this. When there’s a risk that polling can be seen as giving results that are for/against the party leader personally, there can be a strong pressure, both implicit and explicit, to point the polling questions elsewhere.
Which is why getting the lesson right is about not only focusing Lib Dem research resources in a more balanced way in this Parliament but also getting the structures around making such decisions right so that internal pressures don’t push it off course during tough times.
The Lib Dem constituency polling
The constituency results were well off what the party’s constituency polls told it to expect. What’s more, as in 2010, the party ended up doing things on polling day which, with hindsight, were tragically misplaced. The poster boy for this in 2010 was running an intensive polling day operation in Oxford East, which was lost by 4,581 votes, whilst neighbouring Oxford West and Abingdon didn’t get that help and was lost by just 176. In 2015, amongst the places the party’s central London volunteer phone bank was knocking up on the eve of poll was Maidstone – lost by 10,709. (Not all the phoning was misdirected, as polling day phoning included Sutton & Cheam and Eastleigh.)
So those polls were done all wrong, right? Well, only if you ignore the more nuanced evidence which is to be gleaned from other polls, both right and wrong.
First, the Lord Ashcroft opinion polls were also very badly off in the case of Liberal Democrat MPs, with people like Stephen Lloyd losing despite being shown to be ahead.
As with the Lib Dem constituency polls, Ashcroft’s polls were conducted well ahead of polling day, so the polls may have been right at the time of fieldwork – especially as some, but not all, pollsters think there was a late and large swing to the Conservatives.
That both Lib Dems and Ashcroft were wrong doesn’t mean the Lib Dem polls were done without fault, but it does suggest that some of the criticisms made at the time of them about their methodology were off because Ashcroft’s polls, with their very different methodology, were also off.
Second, the past voting weightings the party used in some of the published constituency polls attracted queries, yet the party actually tried out different past vote weightings – and consistently the polls were out.
What’s more, the specific methodology criticisms don’t really stack up with the evidence we have of national and constituency polls that (we are told) were right.
Survation found that prompting for candidate names worked best of its different methodologies, which suggests that the Lib Dem preference for this over Ashcroft’s insistence on not naming candidates (though in his by-election polls he does) was not obviously wrong. Especially as the Tory internal polls, which we’re told were pretty accurate, also relied on naming candidates (although their polls may not have been quite as clear at the time as we’re now being told, given that George Osborne didn’t believe the Tories were going to win an overall majority, offering to kiss Lynton Crosby if they did). Speaking after the election, the Conservative pollster Mark Textor said:
Published polls kept showing we were losing seats that our polling showed we weren’t, because when you don’t measure the incumbency factor through a local candidate name in seats where you are spending enormous time and effort and money upping the name identification of a local member then that’s a big mistake.
As for question order, the final wave of Liberal Democrat constituency polls asked about the merits of individuals before their voting intention question, something which mirrors the approach that the highly respected pollster Gallup took for years with its national polls in the UK. Oddly, none of the critics of this approach I read mentioned that Gallup did it too, which suggests their willingness to write this off as dodgy, biased, baby-eating and the like was based on rather thin knowledge of what pollsters actually do.
Moreover, Labour’s internal polls which, from the published accounts, were much more accurate than the public polls (mostly) relied on asking a range of questions ahead of the voting intention question, again suggesting that leading with voting intention isn’t simply the slam dunk right answer to polling purity and excellence.
So were the Lib Dem constituency polls wrong? Yes.
Was this due to naming candidates, asking about the merits of candidates first or past voting weightings? Very likely no.
Other explanations are needed, one of which I suspect will be that the Lib Dem polls accurately caught how people would have voted had the election result been clear in advance (as with 1997, 2001 and 2005) and had voters marked their ballot papers thinking about who they wanted as their MP rather than who they wanted as PM.
The three elections in which the Liberal Democrats significantly underperformed compared with internal expectations – 1992, 2010 and 2015 – were all ones where the result was unclear in advance and so there was a greater pressure on voters to think about Prime Minister rather than MP when casting their vote.
There’s more on this in The other thing that went wrong with Liberal Democrat polling.
The other data errors
However, it wasn’t just the constituency polls that misled Liberal Democrats over the party’s prospects. So too did the other intelligence being gathered about the situation in local seats.
This is an area where all parties are less willing to talk publicly about exactly what they do, but it’s not giving away any secrets to point out that this was not only the first general election in which the Liberal Democrats were using the Connect database, but it was also the first general election in which the party was using a new system of canvassing classification. On top of that, it was also the first general election in which the party was using a new approach to selecting which voters to canvass – and so using a different frame of reference within which to understand the figures which in turn were based on that new classification system.
The possibility of failing to read the figures right when you’re using systems that are new in several respects for the first time in a general election is something the party’s post-mortems should definitely study.
Especially as a more old-fashioned approach to looking at the figures, which I used, seemed to work better than the one others in the party were officially using. That is of course easy to say in hindsight, and it is based on a relatively narrow set of evidence, so it is not a good basis for me to claim any special wisdom but rather a good indication that this is an area to study further.
The other question is about the KPIs (key performance indicators) which the party tracked. These were heavily based on what is easy to measure with apparent precision – just the sort of enticing trap which so often leads KPIs astray in the public sector, and in the private sector too for that matter. They were also KPIs heavily based on volume of activity. High-volume, low-quality activity which was not having an impact on voters could too easily produce strong KPI performances.
The party was quite right to want a rigorous KPI framework within which to decide how to allocate scarce campaigning resources. Moreover, in the heyday of key seat campaigning in the decade from the mid-90s, these sorts of measures worked. However, they no longer do.
As with the canvassing data questions, however, there is a serious question to study about whether the KPI data was really telling people something useful about the party’s chances in constituencies, or whether its apparent rigour and precision ended up misleading because it didn’t capture enough of what really matters.
Both these questions are ones more to explore in private by the party, but explore them the party must along with the lessons from the polling.
Changing nothing would be a mistake. It would equally be a mistake to condemn everything as the lessons to be learnt are to be found in the subtle mix of what did and didn’t work not just for the Liberal Democrats but also for others.