The victorious candidate was the best candidate, their campaign manager the best campaign manager and their campaign tactics the best tactics. The loser was a worse candidate, with an inferior team, a poor plan and bad tactics.
This sort of logic pervades analyses of political campaigning, even when the margin of victory is razor thin. The perceived genius of Karl Rove and the perceived incompetence of Al Gore rested on a wafer-thin margin in Florida in 2000. A tiny movement of votes or legal opinion, and the post-2000 verdicts would have been very different. Al Gore would not have been frustratingly plodding; he would have been admirably stoical. Karl Rove would not have been ruthlessly effective; he would have been an eccentric extremist.
It is only rarely that the loser who actually got many things right, or the winner who really rather messed up, manages to break through the alluring simplicity of ‘they won so they’re the best’.
The question of whether the winner was really that good, or the loser really that bad, rests at the heart of Sasha Issenberg’s The Victory Lab: The Secret Science of Winning Campaigns. It is unasked by the author, but should be repeatedly asked by the thoughtful reader, for one of the book’s main themes is the way that randomised controlled trials, of the sort common in scientific research and increasingly so in marketing too, are spreading to politics. Yet the political poster boy for testing tactics with randomised controlled trials was also a disastrous loser when it came to his big political chance.
Issenberg provides a very entertaining and well-researched history of how political scientists and political campaigners have splutteringly moved towards much more rigorous research into what works in political campaigns. Aside from one or two pioneering efforts early in the 20th century, this is really a tale of the late 20th century onwards, with campaigns splitting the electorate into different test groups and then seeing what difference it makes to try out different campaign tactics on them. Does sending one group of voters a letter encouraging them to vote have more or less of an impact than giving a different group of people a phone call, for example?
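The logic of such a field experiment can be sketched in a few lines. This is a hypothetical illustration, not anything from the book: the group sizes, tactics and simulated turnout rates are all invented. Voters are randomly assigned to a letter group, a phone group or a control group, and turnout is then compared across the groups.

```python
import random

random.seed(0)  # fixed seed so the illustration is reproducible

# Hypothetical pool of 3,000 voters, randomly split into three equal groups
voters = list(range(3000))
random.shuffle(voters)
groups = {
    "letter": voters[:1000],
    "phone": voters[1000:2000],
    "control": voters[2000:],
}

# Invented underlying turnout rates, just to give the simulation something
# to measure; a real experiment would observe actual turnout records
true_rates = {"letter": 0.42, "phone": 0.45, "control": 0.40}

turnout = {
    name: sum(random.random() < true_rates[name] for _ in members) / len(members)
    for name, members in groups.items()
}

for name, rate in turnout.items():
    print(f"{name}: {rate:.1%} turnout")
```

A real campaign experiment would, of course, check the observed differences against sampling error before concluding that one tactic beats another.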
The appeal of such testing is that it puts the tools of political campaigning on a much firmer base than reliance on the gut feel of experienced campaigners. It is also, of course, an approach already well established in closely related disciplines, such as marketing and digital campaigning.
The risk, however, is that the factors you are trying to isolate are of little value in a complicated world of numerous actions and diverse influences. Does that one letter versus one phone call test really say something useful for the real world of campaigning where people are on the receiving end of multiple other forms of communication, let alone factoring in the other ways their votes are influenced too?
What is more, do you therefore end up concentrating on how to optimise different tactics that, even when added together, are only a tiny factor in deciding an election, and as a result spend time and effort looking in the wrong place for the real secrets to electoral success?
That risk is personified in The Victory Lab by the aforementioned poster boy for randomised testing in politics, the American Republican Rick Perry. For a few years in the first decade of this century he was the icon for evidence-based campaigning, assembling a star-studded cast of political science advisors, letting them loose testing, generating evidence and applying the lessons.
Then came his disastrous bid to be the Republican Presidential nominee for the 2012 contest, leaving him looking a risible figure after he failed in a TV debate to recall the name of one of the three federal government agencies he had pledged to axe.
For all the interesting and smart evidence-based approach to political campaigning displayed by his campaign, it was myopic. Perry’s political career would have been better served by a wider perspective that remembered how often candidates are made or unmade by striking debate performances or speeches. Fiddling around with whether to make a third phone call to a subset of voters would have been better replaced by more of a focus on how to give brilliant speeches and how to shine in debates. (A parallel with marketing in the commercial sector is that fine-tuning your social media adverts while neglecting basic product testing may result in some very smart lessons being learnt about advertising, followed by disaster as the product fails because it is duff.)
Someone who certainly had the edge over Perry in speech giving, and – save for on his bad days – in debates, was Barack Obama. His campaigns too eulogised testing, data and smart thinking about tactics. As a result, his 2008 campaign gets a heavy mention in Issenberg’s book (with 2012 much less so, given the timing of the book’s writing and publication).
There are fascinating details of how seriously the Obama campaign took testing, such as the way for 2008 it initially split its paid phone call contact program between 10 vendors and then rigorously checked to see how they were performing. Questions asking for the age of voters were cross-checked against official records to see which companies were really recording accurate information. Moreover, lists of names to call included campaign team members, who were then surveyed on how well the calls were conducted. As a result, these competitive face-offs whittled the list of 10 vendors down to five.
What is much less commented on, however, is how all that worked out for Obama in the vote totals. The state by state swing in votes between the 2008 and 2012 US Presidential election was nearly uniform across the country.
As Larry Sabato’s analysis found,
The correlation between President Obama’s margin in 2012 and his margin in 2008 across all 50 states and D.C. is .96. In other words, you can closely predict Obama’s margin in 2012 almost perfectly from his margin in 2008; his drop from 2008 to 2012 was fairly uniform…
The biggest outliers are Utah, where Obama did substantially worse than expected in 2012, and Alaska, where he did substantially better than expected. Mitt Romney’s Mormonism probably explains why Obama underperformed in Utah, and Sarah Palin’s absence from the national ticket might explain Obama’s uptick in Alaska. [Source: Larry Sabato, Crystal Ball email newsletter]
This uniformity acts as a critique of all styles of campaign, savvy modern data-driven and old-fashioned mass TV advertising alike, because the picture condemns them all. The states that received the smartest campaigning, the cleverest uses of data and the biggest TV ad buys moved no differently from those that were ignored.
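That near-uniform swing is easy to illustrate numerically. The sketch below uses invented state margins, not the real 2008 and 2012 results: when the 2012 margin in each state is roughly the 2008 margin minus a constant drop, the correlation between the two comes out close to 1, just as Sabato found with the real data.

```python
# Pearson correlation, computed from scratch for the illustration
def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

# Invented winning margins (percentage points) in six hypothetical states
margins_2008 = [12.0, 5.0, -3.0, 25.0, 0.5, -10.0]
# 2012: each margin shifted down by roughly the same two-point drop
margins_2012 = [10.1, 2.8, -5.2, 23.0, -1.4, -12.1]

r = pearson(margins_2008, margins_2012)
print(f"correlation: {r:.2f}")
```

A correlation this close to 1 is exactly what a uniform swing looks like: knowing a state’s earlier margin tells you almost everything about its later one.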
Issenberg hugely praises the Obama 2008 effort, and understandably:
The 2008 Obama campaign would become, in a sense, the perfect political corporation: a well-funded, data-driven, empirically rigorous institution that drew in unconventional talent ready to question some of the political industry’s standard assumptions and practices and emboldened with new tools to challenge them.
If anything, the Obama 2012 campaign was even better. Yet the 2008 and 2012 campaigns did not have identical targeting plans. Therefore, if all their efforts were making a difference, we should expect to see variations that demonstrate it. But remember that uniform swing. What’s more, when broken down in detail, the numbers behind the Obama 2012 grassroots effort look rather small scale compared to what’s required to win.
Again, the lesson is: don’t be taken in by the seeming precision and novelty of specific tactics. There is much more at work.
What the Obama campaign also epitomised, and what Issenberg covers well, is the switch from targeting by geography to targeting by person, often given the name microtargeting.
Previously, geographic targeting, which assigns scores to areas for their likelihood of voting and of backing a particular party, had reached high levels of sophistication in the US by the 1980s. Then the availability of more data in electronic formats, the accumulation of numerous new data sets about people (often consumer-based), faster computers and a greater need to winkle out votes from areas previously written off all combined to make it possible and enticing to model which individuals were the best prospects for a campaign. No longer, for example, did all the voters in an area have to be written off because it looked to be 80% Republican (the geographic modelling approach). Instead, individual modelling could be used to try to tease out the 20% worth paying attention to.
There is a huge competitive advantage in having the best modelling approach, and so all the campaigns and their suppliers are rather coy about exactly what they did. What is clear, however, is that they had a range of sources of data about people:
- public data about neighbourhoods, such as from the census;
- public data about individuals, such as lists of bankruptcies;
- purchased data about individuals from private consumer data warehouses, such as who owned particular models of cars (although Issenberg points out that some of the most headline-catching examples of such modelling were actually not that useful to political parties; likewise in Britain the media stories about political targeting by yoghurt purchasing preferences have not led to canvassers staking out the chilled cabinets in supermarkets);
- publicly shared data from individuals, especially information they voluntarily share through social media; and finally
- data directly gathered from individuals by the political campaign, such as by getting someone to fill in a survey online or to talk to a volunteer on the doorstep.
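Those separate sources can then feed a single per-voter model. The sketch below is a hypothetical hand-weighted logistic score; the field names and weights are invented for illustration, since real campaign models are fitted from data and kept closely guarded.

```python
import math

def support_score(voter):
    """Combine features from different data sources into a support probability."""
    z = (
        -0.5
        + 1.2 * voter["past_primary_votes"]   # public individual data (voter file)
        + 0.8 * voter["neighbourhood_lean"]   # public area data (census)
        - 0.6 * voter["consumer_flag"]        # purchased consumer data
        + 1.5 * voter["survey_support"]       # campaign's own canvass/survey data
    )
    return 1 / (1 + math.exp(-z))             # logistic link: score -> probability

voter = {
    "past_primary_votes": 1,     # voted in one recent primary
    "neighbourhood_lean": -0.3,  # lives in a mildly hostile area
    "consumer_flag": 0,
    "survey_support": 1,         # told a canvasser they were supportive
}
print(f"support probability: {support_score(voter):.2f}")
```

A campaign would then rank voters by score and target everyone above a chosen threshold, which is exactly how the 20% worth paying attention to get teased out of an otherwise hostile area.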
The modelling of the likely behaviour of individuals, in this respect, is already very familiar in the mass consumer marketing world. Who shops at supermarket X and therefore which other people living in the area are the most likely new customers to win over? Why hasn’t person Y come back to shop there for 5 weeks? The political equivalents of those sorts of questions are now, as Issenberg documents for the US, becoming a common part of campaigning.
In total, campaign expenditure on the 2012 US political campaigns broke the $2 billion mark for the first time. That may sound like a large number, but it still comes out at less than $9 per American adult and is equivalent to only about 2% of the total US advertising market in 2012 (even though the political figure includes many non-advertising costs).
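The per-adult figure is easy to check. This back-of-envelope calculation assumes a 2012 US adult population of roughly 235 million; the exact census figure differs slightly, but not by enough to change the conclusion.

```python
total_spend = 2_000_000_000   # 2012 campaign spending, from the text
us_adults = 235_000_000       # assumed adult population

per_adult = total_spend / us_adults
print(f"${per_adult:.2f} per adult")  # comfortably under $9
```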
Though minnows in money terms compared to the commercial sector, political campaigns do lead the way in recruiting immensely talented people who work fanatically hard for a concentrated period on very concrete goals. As a result, there is a well-trodden path from the political to the commercial marketing world for bright people, taking skills and tools honed on campaigns to the bigger budgets and year-round work of the commercial sector. The 2012 campaign already looks no different, with many of the online campaigning tools in particular spreading out and down to more diverse and smaller-scale operations.
These tools are spreading to British politics too, especially as the Liberal Democrats use the same core database package, produced by NGPVAN, as the Obama 2012 campaign, whilst the Labour Party has signed up Blue State Digital, the same e-campaigning firm used by Obama.
The level of access Issenberg acquired to the insides of US campaigns is not flaunted in the book, but revealed in telling details, such as the tensions between the Obama campaign and Blue State Digital over data being stuck in silos in the latter’s systems. That runs counter to the common public story and makes the book much more than simply edited highlights of the public record.
Instead, the book recounts how campaigns are increasingly putting people at the centre of their marketing techniques – even if frequently only as numbers in an algorithm. The skills and research techniques documented in The Victory Lab are clearly very good for securing value for money from tactical campaign spending decisions, but a big question remains: how much does such tactical finesse matter in determining the result of an election? Are these pieces of tactical wizardry the route to political campaign success, or do they distract you into a cul-de-sac, where you are left trying to squeeze the last bit of optimisation out of a tactic whilst the election is being won or lost on bigger territory elsewhere?
That wider question is implied too in the book’s epilogue, which recounts how attention amongst some US political scientists and campaigns (especially Obama 2012) has started to switch towards questions of voter psychology: trying to understand what makes voters tick and therefore how best to nudge them in the right direction. They are starting from the question of how to make a voter change their behaviour rather than how to raise the cost-efficiency of a campaign tactic.
That is where the future lies.
An earlier version of this piece appeared in the Journal of Direct, Data and Digital Marketing Practice, Volume 14 Number 4, 2013.