At first glance, the news from Chicago’s attempt to use big data to cut crime looks fairly standard reality-check fare. Not every clever thing works every time and not all the world is a Malcolm Gladwell anecdote.
Chicago’s Experiment in Predictive Policing Isn’t Working…
A new report put together by Jessica Saunders and colleagues at the RAND Corporation examines how the Chicago Police Department implemented a predictive policing pilot project in 2013 and 2014. The city used a computer model to examine data on people with arrest records and come up with a list of a few hundred individuals deemed at elevated risk of being shot (or committing a shooting—the two groups have a striking amount of overlap). The idea was that police would be able to use the list to reach out to people and try to help them out of high-risk situations. [MIT Technology Review]
But there’s a twist, and it’s in the main reason for the project’s failure so far.
What has failed isn't directly the data, analysis or technology. Rather, the problem is how the humans fit in:
The researchers found that in over two-thirds of cases, police throughout the city simply ignored the list.
The observations and interview respondents indicate there was no practical direction about what to do with individuals on the [list], little executive or administrative attention paid to the pilot, and little to no follow-up with district commanders.
In other words, a technology project to target people was rolled out without proper training for those who would need to take part in it, with little tracking of how people were getting on with the technology, and with little follow-up with those in control on the ground.

That all sounds rather like the messy reality behind many political technology projects too. Behind the headline-grabbing myths about how political parties target you based on your yoghurt purchases lies a much messier picture of humans sometimes using the tech, sometimes following the script, sometimes going where they are asked, sometimes using the suggested literature. And often not.
As I wrote when reviewing the excellent political data reality check that is Ground Wars:
Often Nielsen found that what campaigners actually did on the doorstep varied greatly from the scripts the campaign HQ had given them. They might leave HQ with scripts in hand carefully telling them what to say to voters, but actual encounters with voters rarely followed the script – even if they started with it, which was by no means certain. So far, so familiar to campaigners.
But what Nielsen then adds is a useful perspective on how this issue of what frontline staff or volunteers say and how people at ‘the centre’ can influence it is a common problem across many different organisations, from those trying to get the most out of their sales staff to coffee chains trying to ensure staff are friendly to all customers. There is much that political campaigns can learn from the different approaches that others take to similar problems, such as thinking more about training on general skills (e.g. how best to handle an angry person) rather than trying to micromanage the words to be used. Ground Wars also includes some fascinating vignettes from canvassers bemoaning the lack of training and communication about what they are doing and why compared to their experiences working in the commercial sector, such as for fast food firms.
Technology isn’t just about code. It’s about humans too.