During the general election, I commented on how wise the BBC’s editorial approach to opinion polls looked. Forged in the aftermath of the polling debacle of 1992, those guidelines include:
When reporting the findings of opinion polls (especially voting intention polls in the United Kingdom), whether commissioned by the BBC or others:
- We should not lead a news bulletin or programme simply with the results of an opinion poll
- We should not headline the results of an opinion poll unless it has prompted a story which itself deserves a headline and reference to the poll’s findings is necessary to make sense of it
- We should normally report the findings of opinion polls in the context of trend and must always do so when reporting voting intention polls. The trend may consist of the results of all major polls over a period or may be limited to the change in a single pollster’s findings. Poll results which defy trends without convincing explanation should be treated with particular care
- We should not use language which gives greater credibility to the polls than they deserve. For example, we can say polls “suggest” and “indicate”, but never “prove” or “show”
As I commented at the time, the wealth of polls and the media’s frequent amnesia over how to report a poll sensibly made the BBC guidelines look rather wise. They made the BBC’s reporting less error-prone and no less interesting (if you define interesting as being at least vaguely connected to the facts; if you don’t, just grab a novel instead of a news story).
That was true then, when both the Sunday Times and The Guardian had been caught out with front page splashes which even at the time, without hindsight, seemed dodgy and which now, with hindsight, should form a basic part of any ‘how not to do it’ training course. For the Sunday Times, the problem was splashing on just one poll, which was within hours followed by several others that told a different story. For The Guardian, the problem was splashing on three polls – but on an evening when five polls came out, and the other two told the opposite story to the three. The banner front page graphic gave no clue that it was screaming for the reader’s attention based on only 60% of the story – or that the other 40% told the opposite one.
In fact, for The Guardian things were worse, for mistakes were made twice – reporting only a selection of polls, and reporting figures based on a tiny sub-sample without making its minuteness clear – and yet later in the campaign those mistakes were not repeated. Why does that make things worse? Because it means the mistakes were made when they gave the story a more pro-Labour slant, while the figures were duly caveated later when they looked less good for Labour.
That may well have been an unfortunate coincidence – the mistakes happened to be pro-Labour by chance and were learnt from – or a case of subconscious bias – it’s much easier to make mistakes (as I know!) when the data points towards what you want. But either way it’s not a happy pattern.
But credit is most certainly due to The Guardian for now adopting a much wiser editorial line, retaining the baby whilst throwing out the bathwater:
The Guardian has decided to take a pause on reporting polls as political news, but will not rush to discontinue the monthly series of surveys which it has commissioned over the last 30 years. Instead, this series will be maintained in a low-key way: while lessons are learned, methodologies are refined and – we hope – trust is restored.