A recent post on the ACLU of Massachusetts’ PrivacySOS blog asks “What’s predictive policing?” The explanation offers another example of what I consider misguided commentary about predictive policing. And while some of the remarks are valid, the author’s conclusion would more likely lead one to assume that he is for, rather than against, predictive policing.
Quote #1: "The accuracy of predictive policing programs depends on the accuracy of the information they are fed."
That's a valid remark about limitations. The data available are limited to the incidents reported. However, that's no different from the current situation: police can only respond to reported or observed incidents, and their current reporting, analysis and decision making are likewise focused only on reported incidents. If there is a large discrepancy between reported incidents and actual incidents, it is not a problem of the analytic approach. Instead, it is a societal and cultural problem that should be discussed as such, not as an argument against the analytical approach involved.
The author then makes a sudden move from discussing reported incidents to talking about arrests, which are, of course, completely different events. Police presence is an important inhibitor of crime. The example used in the post illustrates one possible application of predictive analytics in policing, one focused on force deployment. It’s important to note, however, that force deployment is about the smarter deployment of police resources to proactively prevent crimes by being present, taking advantage of information on reported incidents (not arrests). It is certainly not a strategy aimed at proactive or preventative prosecution.
Predictive policing does not make use of personal data on individuals. It does, however, look at data about the following (a concrete sketch follows the list):
- Crimes (e.g., type of crime, location, time, modus operandi)
- Triggers of crime (e.g., local events, pay days, holidays)
- Enablers of crime (e.g., police presence, weather, status of streetlights, other protection measures)
- Location (e.g., proximity to an ATM, number of empty buildings, average distance between buildings, alarm systems)
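To make these inputs concrete, here is a minimal sketch in Python of what a single feature record for one map cell and time window might look like. All field names and values are illustrative assumptions on my part, not the schema of any real predictive policing system:

```python
# A minimal sketch of the kind of crime/trigger/enabler/location features
# such a model might use. All field names are illustrative assumptions,
# not taken from any real predictive policing product.
from dataclasses import dataclass

@dataclass
class CellFeatures:
    """Features for one map cell in one time window."""
    crime_type: str              # crime: e.g. "burglary"
    hour_of_day: int             # crime: time component of reported incidents
    is_payday: bool              # trigger: local pay day
    is_holiday: bool             # trigger: public holiday
    streetlights_working: float  # enabler: share of working streetlights
    patrols_last_week: int       # enabler: recent police presence
    dist_to_atm_m: float         # location: proximity to an ATM, in meters
    vacant_buildings: int        # location: empty buildings in the cell

# Example record for one cell and time window (invented values)
example = CellFeatures(
    crime_type="burglary",
    hour_of_day=22,
    is_payday=True,
    is_holiday=False,
    streetlights_working=0.6,
    patrols_last_week=3,
    dist_to_atm_m=120.0,
    vacant_buildings=4,
)
print(example)
```

Note that nothing in such a record identifies a person; it describes places, times and circumstances.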
It is true, however, that data relating to individuals can be of use in other areas of predictive policing. For example, such data can help construct models to predict repeat victimization, allowing police to better identify, protect and help those who may be vulnerable. It can also help in developing models for predicting repeat offenders: better insight into the risk of repeated offenses makes it easier for law enforcement to help reduce recidivism.
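As a hedged illustration only, here is a tiny sketch of what a repeat-victimization risk model might look like. The features, data and library choice (scikit-learn) are assumptions for demonstration, not a description of any deployed system:

```python
# A hedged sketch of a repeat-victimization risk model.
# Features, data and labels are invented for illustration only.
from sklearn.linear_model import LogisticRegression

# Each row: [prior_incidents, home_security_score, months_since_last_incident]
X = [
    [0, 0.9, 24],
    [3, 0.2, 1],
    [1, 0.5, 6],
    [4, 0.1, 2],
]
y = [0, 1, 0, 1]  # 1 = victimized again within a year (synthetic labels)

model = LogisticRegression().fit(X, y)

# Estimated repeat-victimization risk for a new (invented) case
print(model.predict_proba([[2, 0.3, 3]])[0][1])
```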
Quote #2: “The predictive policing model is deceptive and problematic because it presumes that data inputs and algorithms are neutral, and therefore that the information the computer spits out will present police officers with objective, discrimination-free leads on where to send officers or deploy other resources. This couldn't be farther from the truth.”
Yes, the author is partly correct, and therefore also partly incorrect. The algorithms themselves are neutral, but they function within a normative environment that determines how decisions are made. These norms are set not by the analytical approach but by the societal and political context. On the input side, they are shaped by limitations (privacy, legal and technical) and by choices (political priorities, the types of crimes recorded, geographic divisions, what information is captured) made in collecting the data. On the output side, multiple influences determine how the algorithms’ results are used in decision making, including:
- How important is crime on the political agenda?
- Which crimes are “high priority crimes”?
- How do we measure police performance? Number of arrests? Number of convictions? Number of “high priority crimes”?
- What’s the budget allocation for police?
- Police strategy:
  - Is our strategy focused on suppression, prevention, resolution, or some mix of these?
  - How do we translate political objectives into organizational goals and measurements?
  - How do we organize departments and allocate budgets across branches and activities (patrol, investigation, offender management, traffic, citizen services)?
  - How do we make our decisions?
  - How do we work, and who do we work with?
  - How do we tackle specific issues?
  - How do we decide where to deploy which resources?
  - How many resources do we have available, and how does that influence our goals?
None of this is different from the processes the police currently use. The real difference is this: if a decision is based on data instead of human interpretation, it is likely to be applied more consistently across all entities, minimizing the effect of prejudices, biases and other human decision-making frailties and judgment errors. Using data also makes police decision making more transparent, since the algorithms will only deliver the required results if the environment in which they function has been made as explicit as possible. I usually refer to this as the “decisioning architecture”: the definitions and rules governing how algorithms will be used for decision making within the relevant political, strategic, tactical and operational objectives, goals, opportunities and constraints. This decisioning architecture is defined, and therefore influenced, not by the algorithms or the technology, but by the societal, political and management layers surrounding the application.
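As a sketch of how a decisioning architecture could make those surrounding choices explicit, consider the toy Python example below. The policy values and field names are invented placeholders; the point is only that priorities, thresholds and budget caps live in reviewable policy data rather than inside the algorithm:

```python
# A hedged illustration of a "decisioning architecture": the political and
# managerial choices are made explicit as data, separate from the model.
# All values here are invented placeholders, not real policy.
DECISION_POLICY = {
    "high_priority_crimes": ["burglary", "assault"],  # political choice
    "max_patrols_per_district": 5,                    # budget constraint
    "min_score_to_deploy": 0.7,                       # operational threshold
}

def deploy(predictions, policy=DECISION_POLICY):
    """Turn model scores into deployment decisions under explicit rules."""
    decisions = []
    for p in predictions:  # p: {"district": str, "crime": str, "score": float}
        if p["crime"] not in policy["high_priority_crimes"]:
            continue  # priorities are set by policy, not by the algorithm
        if p["score"] >= policy["min_score_to_deploy"]:
            decisions.append(p["district"])
    # budget cap applied transparently, per the explicit policy
    return decisions[: policy["max_patrols_per_district"]]

print(deploy([
    {"district": "North", "crime": "burglary", "score": 0.82},
    {"district": "South", "crime": "vandalism", "score": 0.91},
]))
```

Every threshold and priority in such a setup is visible and reviewable, which is exactly what makes the decision making transparent.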
Quote #3: “Law officers like to say that predictive policing helps them dodge questions about racism and unequal policing. But data isn't neutral, and neither are the algorithms tasked to sort through and make sense of those pieces of information.”
Predictive policing, if it is placed within the context of a decisioning architecture, will advance transparent decision making and enable the eradication of prejudice and bias in police decision making, to whatever extent the political (and societal) environment allows. The police will also become more accountable.
Quote #4: “It's sort of a cultural axiom in the United States that high-powered bankers and lawyers have tastes for expensive cocaine and prostitutes”
Here the author is doing exactly what (s)he accuses the police of doing. “Cultural axiom” is a euphemism for “prejudice.” Is there any data supporting this claim, or is it based on urban myths? Or did the author fall into the trap of decision-making heuristics?
Quote #5: “If an algorithm is only fed unjust arrest data, it will simply repeat the injustice by advising the police to send yet more officers to patrol the black area. In that way, predictive policing creates a feedback loop of injustice.”
The opposite is true. Predictive policing will require that each arrest be adequately justified. Using predictive modeling, each arrest could be evaluated against a set of explicit criteria to determine whether it was just. The models would also give police management insight into the drivers of unjust arrests, so they could change their strategy, tactics and policies to prevent them.
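A minimal sketch of that auditing idea follows, assuming a set of explicit review criteria; the criteria, weights and threshold here are invented for illustration, and a real review process would define its own:

```python
# A minimal sketch of the auditing idea: score each arrest against explicit,
# reviewable criteria. Criteria and weights are invented placeholders.
ARREST_CRITERIA = {
    "reported_incident_linked": 0.4,  # arrest tied to a reported incident
    "evidence_documented": 0.4,       # supporting evidence on record
    "stop_reason_recorded": 0.2,      # documented reason for the stop
}

def justification_score(arrest):
    """Return a 0..1 score for how well an arrest meets the criteria."""
    return sum(
        weight for criterion, weight in ARREST_CRITERIA.items()
        if arrest.get(criterion, False)
    )

arrests = [
    {"id": 1, "reported_incident_linked": True, "evidence_documented": True,
     "stop_reason_recorded": True},
    {"id": 2, "reported_incident_linked": False, "evidence_documented": False,
     "stop_reason_recorded": True},
]

# Flag weakly justified arrests for human review
flagged = [a for a in arrests if justification_score(a) < 0.5]
print(flagged)
```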
See for example: