FAQ: Are the Liberal Democrat Voice surveys of party members accurate?

UPDATE: This FAQ is from 2012. For a late 2014 perspective on the Lib Dem Voice surveys see my posting here.

Results from the Lib Dem Voice member surveys often get picked up by the media (including this graphic) and other websites these days. They also often generate lively discussion online, not only on Lib Dem Voice itself. As a result, various questions about the accuracy or otherwise of the surveys often come up. The following FAQ covers the questions I’ve seen asked and provides answers to them.

If you’ve got any other questions to add to the list (or queries and suggestions on the answers), by all means get in touch or post a comment below. If there are enough to justify it, I’ll produce an updated FAQ.

 

Is it really party members taking part in these surveys?

Yes.

Unlike some of the surveys that take place for other parties, you have to be a party member to take part. The list of participants is checked against the party’s membership records and anyone who is not a current party member is removed. Then everyone on the cleaned list is sent a survey link, which contains a unique code that can only be used to complete the survey once.

So not only are survey links sent solely to members, a member also can’t get lots of non-members to complete surveys by forwarding the email on to them.
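Lib Dem Voice hasn’t published the internals of its survey tool, but as a rough sketch of how single-use links of this kind typically work (the class, names and URL here are all made up for illustration):

```python
# Illustrative sketch only - not Lib Dem Voice's actual code. It shows
# one common way single-use survey links work: each verified member is
# issued a random token, and a token stops working after first use.
import secrets

class SurveyLinks:
    def __init__(self):
        # token -> {"member_id": ..., "used": bool}
        self.tokens = {}

    def issue(self, member_id):
        """Create a unique survey link for one verified member."""
        token = secrets.token_urlsafe(16)
        self.tokens[token] = {"member_id": member_id, "used": False}
        return "https://example.org/survey?code=" + token

    def redeem(self, token):
        """Accept a token exactly once; reject unknown or reused ones."""
        entry = self.tokens.get(token)
        if entry is None or entry["used"]:
            return False
        entry["used"] = True
        return True

links = SurveyLinks()
url = links.issue("member-12345")   # sent only to a checked member
code = url.split("code=")[1]
print(links.redeem(code))           # True: first completion counts
print(links.redeem(code))           # False: a forwarded copy fails
```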

 

There are tens of thousands of party members; a few hundred answering a survey isn’t enough to tell you what they all think.

The British Polling Council has a good FAQ of its own, looking at the similar issue of how a public opinion poll of 1,000-2,000 people can give results for an electorate of tens of millions:

How can you possibly tell what millions of people think by asking just 1,000 or 2,000 respondents? In much the same way that a chef can judge a large vat of soup by tasting just one spoonful. Providing that the soup has been well stirred, so that the spoonful is properly “representative”, one spoonful is sufficient. Polls operate on the same principle: achieving representative samples is broadly akin to stirring the soup. A non-scientific survey is like an unstirred vat of soup. A chef could drink a large amount from the top of the vat, and still obtain a misleading view if some of the ingredients have sunk to the bottom. Just as the trick in checking soup is to stir well, rather than to drink lots, so the essence of a scientific poll is to secure a representative sample, rather than a vast one.

For the Lib Dem Voice surveys, the number of responses we get usually means we can say that there is a 95% chance that the true result for any question lies within +/- 4.5% of the figure the survey produces. For example, if a survey result finds 58% of members like to eat chocolate in the morning, we can be 95% sure the true figure is in the range 53.5% – 62.5%.
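That +/- 4.5% is what the standard margin-of-error formula for a simple random sample gives at the 95% confidence level with roughly 475-500 responses, taking the worst case of a 50/50 split. A quick check of the arithmetic (a sketch of the standard formula, not necessarily how the Lib Dem Voice figure is computed):

```python
# Sketch: standard 95% margin of error for a sampled proportion.
# (This reproduces the +/- 4.5% quoted above; it is not necessarily
# how the Lib Dem Voice figure was calculated.)
import math

def margin_of_error(n, p=0.5, z=1.96):
    """95% margin of error for proportion p observed in n responses."""
    return z * math.sqrt(p * (1 - p) / n)

print(f"n=475: +/- {margin_of_error(475):.1%}")   # ~4.5%
print(f"n=500: +/- {margin_of_error(500):.1%}")   # ~4.4%

# The worked example from the text: 58% observed, +/- 4.5%
p, moe = 0.58, 0.045
print(f"95% interval: {p - moe:.1%} to {p + moe:.1%}")  # 53.5% to 62.5%
```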

That is *if* the survey is a representative one.

 

Are Lib Dem Voice surveys really representative of all party members?

That’s a very good question, and one to which there isn’t a simple answer. However, there are various pieces of evidence that the results are representative, and combined they make a strong case.

First, we sometimes ask questions such as the gender, region and length of party membership of respondents. Some of the answers do come out differently from what the party’s Membership Services has told us the true figures are for the party’s membership. For example, we tend to get a higher proportion of men completing the surveys than the actual proportion amongst party members.

However, whenever we have tried weighting the results to bring them into line with the true figures, it makes very little difference to the answers for the rest of the questions in the survey. For example, if we have too few female and Scottish respondents, weighting the answers to adjust for that makes little difference, even to questions about the performance of prominent female or Scottish party figures.

It is possible to imagine a question where, despite all that experience, this would not hold; perhaps a question on a topic such as sexual discrimination. But for the sorts of questions we’ve asked so far, there isn’t evidence of this problem.
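For the curious, “weighting” here just means scaling each group’s responses so the weighted sample matches the membership’s known make-up. A minimal sketch of the idea, with made-up numbers rather than real survey or membership data:

```python
# Minimal cell-weighting sketch with made-up numbers - not real Lib Dem
# membership or survey data. Each group is scaled so the weighted sample
# matches the known make-up of the membership (just gender, for brevity).
population_share = {"male": 0.55, "female": 0.45}  # assumed true shares
sample_share = {"male": 0.70, "female": 0.30}      # assumed survey shares

# Weight per group = population share / sample share
weights = {g: population_share[g] / sample_share[g] for g in sample_share}

# An example question: share answering "approve" in each group (made up)
approve = {"male": 0.60, "female": 0.55}

unweighted = sum(sample_share[g] * approve[g] for g in approve)
weighted = sum(sample_share[g] * weights[g] * approve[g] for g in approve)

print(f"unweighted: {unweighted:.1%}")  # ~58.5%
print(f"weighted:   {weighted:.1%}")    # ~57.8%
```

The shift is small because the two groups answer similarly; weighting only moves the headline figure much when an under-represented group answers very differently, which is the point made above.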

Second, the first few results in from a survey are often different from, and much more negative than, the later answers. However, this is a small effect: by the time we get to 200 answers or so, the results settle down and further responses do not shift the overall pattern much. If we only got, say, 50 responses, that would be a problem, as that smaller, faster-to-react group would bias the results. As we typically get around 500, and the numbers settle down long before we reach that total, that isn’t happening and the survey gets well beyond the initial (often unrepresentative) group.
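To see the arithmetic behind that, here is a toy illustration (the response rates are invented purely for the example): an unrepresentative first wave of 50 replies dominates a 50-response survey, but is heavily diluted by the time 500 responses are in.

```python
# Toy arithmetic with invented rates: a fast-reacting, unhappy first
# wave of 50 replies dominates a small survey but is diluted by 500.
early_positive = 0.30  # assumed: first 50 responses, 30% positive
later_positive = 0.60  # assumed: later responses, 60% positive

for n in (50, 200, 500):
    early = min(n, 50)
    later = n - early
    share = (early * early_positive + later * later_positive) / n
    print(f"after {n:3d} responses: {share:.0%} positive "
          f"(first wave = {early / n:.0%} of sample)")
```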

Third, where there has been polling of party members done via other means, the survey results stack up with very similar answers. See this post in particular, along with this one.

Fourth, we can compare the results of questions asked about party contests and conference votes with the actual outcomes. This isn’t quite a perfect exercise, as people’s votes can shift between a survey and actually casting their ballot, and not all members go to conference. However, it is indicative.

  • 2008 Party President contest: a survey gave Ros Scott 74%, Lembit Opik 18% and Chandila Fernando 8%. The actual result was 72% / 22% / 6%. The survey did suggest a higher turnout than actually was the case; that is a common problem with opinion polls too.
  • 2010 Special Party conference on the coalition: a survey showed 91% of party members saying if they had a vote at the special conference they would back the coalition. The actual vote at the conference was uncounted but clearly 95%+.
  • 2010 Party President contest: a survey put Tim Farron ahead of Susan Kramer 56% – 29%; actual result was Farron winning 53% – 47%.

 

The wording of that question is so biased

We design the questions to be as neutral as possible, often using the wording that has been used by reputable polling companies in polls of the general public. Draft questions are circulated amongst members of the Lib Dem Voice team for comment first, and the team contains a spectrum of party opinion (including, for example, both people from the Social Liberal Forum and from Liberal Reform).

As an extra safety net, our questions frequently have a comments box, which means respondents can give feedback if the options do not fit their views or they think the wording was off, and we regularly publish selections of comments alongside survey results.

 

But, but, but I’m *sure* the answers are wrong!

Variations on this are almost always raised when someone gave a different answer to a question from the one which turned out to be most popular with members in the survey. Doubting evidence just because you dislike what it says should always be done with caution: believing evidence that gives you the comforting answer while disbelieving evidence that gives you answers you don’t like is a very dangerous course.

That said, it’s easy to be too trusting of evidence and accept it without thinking about it; sometimes it is only when the evidence is uncomfortable that we start paying attention to the details. So by all means pore over the details – and then let us know what you find that makes you raise a question.

One response to “FAQ: Are the Liberal Democrat Voice surveys of party members accurate?”

  1. The vote at the 2010 Special Party Conference was more like 99% in favour. I suspect the difference from the survey's 91% was due to some of the members who were dead against the coalition not going to the Conference, and to others being convinced by the arguments made there.

    Having said that, 91% was a pretty strong level of agreement anyway! You would not get that today – though of course many of us unhappy about how the coalition has worked out still recognise that it was the right thing to do in the circumstances.
