
Media has a fair share of the blame for fake news about ITV election debate

Twitter login screen on a smartphone - CC0 Public Domain

Following the Boris Johnson – Jeremy Corbyn TV election debate, YouGov carried out a super-quick poll of people who watched the debate to judge the winner. The verdict? The public fell 1% short of a top-bants answer by splitting 51%-49% in favour of Johnson.

There was, though, also an outbreak of conspiracy theories online claiming that many polls showed Corbyn winning by a landslide, and that the media showed its bias by ignoring them. Added to this were claims that all those other results were much better than the YouGov one because far more people had responded to them.

One slight contributor to this was the fairly standard decision by YouGov to ready a story on its website ahead of time, which was then updated with the poll results after the debate. This is a normal publishing approach, and one I follow on this site myself. However, it meant that the published timestamp at the top of the story was from before the debate ended.* Cue conspiracy theories about how the poll was faked or faulty because it came out before the debate ended. (Of course the conspiracy theory has to skip over the point that someone really faking a poll would have removed or faked the timestamp at the top of the story…)

But at the heart of the conspiracy theories has been a set of Twitter polls, some with tens of thousands of people taking part, showing Corbyn winning.

The media was right to ignore these, as Twitter polls are not a good guide to public opinion or, in the case of the TV debate, to the views of the seven million or so who watched it.

Three reasons are key to understanding this:

  1. Twitter users are not typical of the TV-debate watching audience or the overall electorate. In particular, they skew very significantly to be more Labour-leaning. (An example of the research which shows this is here and another is here.)
  2. Even if Twitter users were typical, self-selecting surveys** would still not be a good guide to the overall picture compared with opinion polls. For example, if I surveyed people queuing up to go into a football match between Chelsea and Arsenal and asked their favourite football club, that wouldn’t tell me what football fans across the country think. It’d be a biased sample that gives a biased result (there’s a rough illustration of this after the list).
  3. As a result of this, larger sample sizes don’t somehow make polls much better. They add a bit of extra potential accuracy and allow sub-samples to be looked at, but for overall questions of ‘what does the public think’, sample sizes of 1,000 upwards are plenty. A national poll with a sample of 1,000 is only inferior to one with a sample of 50,000 when you want to look at what, say, older people in East Anglia thought about the questions.
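
To make points 2 and 3 concrete, here is a minimal sketch in Python of how a self-selected poll with tens of thousands of responses can land much further from the truth than a random sample of 1,000. The numbers in it (a 50–50 split and the differing response rates) are entirely made up for illustration; they are not drawn from any real poll.

```python
import random

random.seed(42)

# Hypothetical electorate: 50% think candidate A won the debate, 50% candidate B.
# (Illustrative numbers only, not real polling data.)
TRUE_SHARE_A = 0.50

# Suppose supporters of B are three times as likely to spot and answer
# a Twitter poll as supporters of A - a made-up skew for illustration.
RESPONSE_RATE_A = 0.01
RESPONSE_RATE_B = 0.03


def twitter_style_poll(population_size: int) -> float:
    """Self-selecting sample: whether someone answers depends on which side they are on."""
    votes_a = votes_b = 0
    for _ in range(population_size):
        prefers_a = random.random() < TRUE_SHARE_A
        response_rate = RESPONSE_RATE_A if prefers_a else RESPONSE_RATE_B
        if random.random() < response_rate:  # this person chooses to take part
            if prefers_a:
                votes_a += 1
            else:
                votes_b += 1
    return votes_a / (votes_a + votes_b)


def random_sample_poll(sample_size: int) -> float:
    """Random sample: everyone has the same chance of being asked."""
    votes_a = sum(random.random() < TRUE_SHARE_A for _ in range(sample_size))
    return votes_a / sample_size


# A self-selected poll that ends up with tens of thousands of responses...
print(f"Self-selected (huge sample): {twitter_style_poll(2_000_000):.1%} for A")
# ...versus a conventional random sample of about 1,000 people.
print(f"Random sample of 1,000:      {random_sample_poll(1_000):.1%} for A")
# Typically prints roughly 25% for A from the self-selected poll, despite the
# true figure being 50%, while the random sample of 1,000 lands close to 50%.
```

The point is that making a self-selected sample bigger just gives you a more precise estimate of the wrong thing; the bias never washes out.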

That final point – about how little extra accuracy much larger samples give – is both simple and obvious if you follow the maths, but also counter-intuitive to anyone who hasn’t come across it before. (If you’re in the latter camp, here’s a good introduction to why such apparently small samples are still big enough.)
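
For anyone who wants the arithmetic behind that, the standard margin-of-error calculation for a simple random sample shows how quickly the returns diminish: the margin shrinks with the square root of the sample size. Here is a rough sketch, assuming a simple random sample and a 50–50 split (the worst case); real polls have extra wrinkles from weighting and design effects, so treat the figures as illustrative.

```python
import math


def margin_of_error(sample_size: int, share: float = 0.5, z: float = 1.96) -> float:
    """Approximate 95% margin of error for a simple random sample.

    share=0.5 is the worst case (largest margin); z=1.96 gives ~95% confidence.
    Ignores weighting and design effects, so real polls are a little noisier.
    """
    return z * math.sqrt(share * (1 - share) / sample_size)


for n in (1_000, 2_000, 10_000, 50_000):
    print(f"sample of {n:>6,}: +/- {margin_of_error(n):.1%}")

# sample of  1,000: +/- 3.1%
# sample of  2,000: +/- 2.2%
# sample of 10,000: +/- 1.0%
# sample of 50,000: +/- 0.4%
```

Going from 1,000 to 50,000 responses takes the margin of error from about three points to under half a point – handy if you want to drill into sub-groups, but rarely worth it for a single national headline figure.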

So the media got it right by ignoring those Twitter surveys and instead reporting the YouGov poll. In that, the media did better than many others.

Such as, ahem, the willingness of some professors to go full-on into promoting such conspiracy theories…

To their credit, many other professors have responded with versions of ‘OMG, no!’.

Except this isn’t a simple story of the media sifting evidence and then running with the credible story, because… who ran several of those Twitter surveys? The answer doesn’t leave journalists and the media themselves in the clear.

Twitter surveys can be fun, sure. They can even sometimes be useful, if you know that the skew in the people answering them doesn’t matter for what you’re trying to find out.

But given how widely they are misunderstood as being proper forms of insight into what the public thinks, perhaps those in the media should think again about creating more fuel for such misunderstanding in the future.

Adding caveats to your own surveys isn’t enough. It’s easy to see that running Twitter surveys on political topics triggers widespread misunderstanding and fake news. So the responsible step isn’t to run one and throw in caveats in messages separate from the survey results; it’s not to run them in the first place.

 

* There are three pieces of evidence that this prosaic version of events is true: (a) I checked the YouGov site before and after the end of the debate, and so saw for myself how it was updated, (b) the relevant person from YouGov has tweeted explaining what they did, and (c) several participants in the poll have tweeted about how it was run after the debate ended.

** Hence the caveats on my own surveys of Liberal Democrat party members, including trying to remember always to call them surveys rather than opinion polls, talking about how the biases in the results may skew them, and benchmarking them against other evidence wherever possible.
