A quick explanation for people who are new to poring over the details of polls and who, as several people have commented, are confused when different sources give conflicting figures for whether a party is up or down in a poll, and by how much.
Different polling companies use different methods, so comparing – say – an ICM poll with a previous BPIX poll isn’t comparing like with like. Therefore when looking at a poll it makes sense to calculate up/down figures based on the previous poll by that polling firm.
However, polling firms often work for more than one media outlet, and media outlets (still) prefer to compare poll results with the previous poll that they themselves commissioned rather than with the previous poll from that firm.
For example, suppose the Lib Dems get 28% in an ICM poll for The Guardian, then 26% in an ICM poll for the Sunday Telegraph and then 27% in another Guardian ICM poll. The Guardian will report that third poll as showing the Lib Dems falling by one point (28 to 27), whilst most commentators – such as Mike Smithson and Anthony Wells – will, more sensibly, report it as +1 (26 to 27).
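To make the two conventions concrete, here is a minimal sketch using the hypothetical ICM/Lib Dem figures from the example above (the data and function are illustrative, not from any real polling series):

```python
# Each poll records who conducted it, who commissioned it, and the Lib Dem share.
polls = [
    {"pollster": "ICM", "outlet": "The Guardian",     "lib_dem": 28},
    {"pollster": "ICM", "outlet": "Sunday Telegraph", "lib_dem": 26},
    {"pollster": "ICM", "outlet": "The Guardian",     "lib_dem": 27},
]

def change_vs_previous(polls, index, same_outlet=False):
    """Change in the Lib Dem share for polls[index] compared with the most
    recent earlier poll by the same pollster. With same_outlet=True the
    comparison is restricted to polls commissioned by the same media outlet
    (the convention most outlets use)."""
    current = polls[index]
    for earlier in reversed(polls[:index]):
        if earlier["pollster"] != current["pollster"]:
            continue
        if same_outlet and earlier["outlet"] != current["outlet"]:
            continue
        return current["lib_dem"] - earlier["lib_dem"]
    return None  # no comparable earlier poll

# Commentators' like-for-like comparison: 26 -> 27, i.e. +1
print(change_vs_previous(polls, 2))                    # 1
# The Guardian's own-series comparison: 28 -> 27, i.e. -1
print(change_vs_previous(polls, 2, same_outlet=True))  # -1
```

The same third poll yields +1 or -1 depending purely on which earlier poll is chosen as the baseline, which is the whole source of the confusion.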
The Guardian is in fact one of the best news outlets when it comes to revealing the existence of other polls by its pollster that were published in rival newspapers. It will often give both figures, though it uses the “Guardian only” comparison in its main graphic. Nearly all the other media outlets don’t even go that far and simply ignore those other polls.
That’s why you sometimes get the same poll reported in different places, and in different tweets, with varying +/- figures. (If you are a regular on Twitter, watch out for Tweetminster, which, as far as I’ve seen, normally gives the media-style figures – ignoring any intervening rival polls – as it tweets the ‘official’ media numbers.)
Just to add some extra complication, there are occasionally two polls from the same firm with overlapping fieldwork dates, raising the question of which of those was the “first” poll. In addition, pollsters sometimes vary from their usual methodology, raising the question of whether that change is big enough that a like-for-like comparison should ignore the poll with the different methodology. Those are very much the rare cases, however; nearly all the variation is explained by the main point above.
All in all, a good reason to follow all the polls rather than just a select few if you really want to know what is going on.