Justin Nix has a good article (academic, but in plain English and not behind a paywall) about the dangers of using cops-killing-people as a variable. He writes in response to an article by Schwartz and Jahn that maps “police violence” across U.S. metropolitan areas. Schwartz and Jahn find, as have I, that
Rates of police-related fatalities varied dramatically, with the deadliest MSAs (metropolitan statistical areas) exhibiting rates nine times those of the least deadly. Overall rates in Southwestern MSAs were highest, with lower rates in the northern Midwest and Northeast. Yet this pattern was reversed for Black-White inequities, with Northeast and Midwest MSAs exhibiting the highest inequities nationwide.
Not that I’m cited or anything (though granted this blog is not a peer-reviewed publication), but I do believe I was the first to observe and try to raise the issue of state and city disparities in police use of lethal force back in 2015 (and every year since then). This was after the Washington Post and Fatal Encounters started keeping a reliable count of those killed by police (beginning in late 2014, after Michael Brown was killed in Ferguson). My goal has always been to have others do more thorough statistical analysis, so I’m happy it’s finally getting some attention.
What Nix does is raise some very serious and legitimate issues about what can be gained from this kind of analysis. That’s not to say one shouldn’t look at those killed by police, but any conclusions need to come with much larger caveats.
Nix’s first issue, however, is with Schwartz and Jahn’s term “police violence” (this and all subsequent quotes are from Nix):
Use of the term “police violence” has the potential to mislead readers who believe that police use of deadly force is rampant and usually unjustified (e.g., those who view police as “vigilantes” or “oppressors”). It also has the potential to drive away readers who understand how statistically rare police use of deadly force is, and that it usually occurs in response to violence (e.g., police officers themselves, and those who view police as “professionals”).
We in the field tend to use “police use of deadly force.” Why does this matter? Because, as Nix says, “‘police violence’ assigns all responsibility to officers, as if none of the citizens involved contributed in any way to the violence.” This feeds a certain anti-policing narrative. But “police violence” also lumps together incidents in which police need to use deadly force and incidents in which police shouldn’t have used deadly force. If cops shoot a shooter actively killing people, it seems a bit unfair to focus on the race of the shooter as a causal factor.
But there is a more serious statistical problem with using police-involved killings as indicative of deadly-force situations. The problem is that “killed by police” is not a reliable and instructive variable even with regard to deadly-force encounters with police! And given that US police have literally hundreds of millions of interactions with people, fatal shootings are really rare (about 1,000 per year).
The number of times police might use deadly force (pointing a gun at somebody) and the number of times police do use deadly force but do not kill (more on that later) far outweigh the times somebody is shot and killed. So we’re studying a very rare phenomenon, extreme outliers, while also missing most of the cases of exactly the phenomenon we’re trying to study.
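To put “really rare” in rough numbers, here’s a back-of-the-envelope sketch. The fatal-shooting count (~1,000 a year) is from the counts discussed above; the interaction total is an assumed round number standing in for “hundreds of millions,” since no official count exists:

```python
# Back-of-the-envelope rarity check. The interaction count is an
# assumption, not an official figure.
fatal_shootings_per_year = 1_000
police_interactions_per_year = 300_000_000  # assumed round number

rate = fatal_shootings_per_year / police_interactions_per_year
print(f"Fatal shootings per interaction: {rate:.7f}")  # 0.0000033
print(f"Roughly 1 fatal shooting per "
      f"{police_interactions_per_year // fatal_shootings_per_year:,} interactions")
# Roughly 1 fatal shooting per 300,000 interactions
```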
The numerators of Schwartz and Jahn’s fatality rates are a nonrandom sample of all deadly force incidents that occurred from 2013 to 2017. To be sure, there is some degree of chance in whether a person who is shot lives or dies (e.g., whether bullets pierce a vital organ). But part of the variation across MSAs both in terms of rates of police-involved fatalities and racial disparities therein might be driven by nonrandom factors apart from police behavior. [italics added]
That’s serious, if your focus (or blame) is on police behavior. When people, myself included, compare fatal police shootings in, say, Las Vegas, Nevada, and St. Louis, Missouri, we assume that fatal shootings represent the number of times cops shoot people. But that’s not the case, at least in the limited number of cities we do know about. In a given time frame, Las Vegas police killed 47 people while St. Louis police killed 20. So it seems like Las Vegas police shoot more than twice as often as St. Louis police. But in fact both departments shoot about equally often: the Las Vegas fatality rate (the chance of dying after being shot by cops) is 41%, while in St. Louis it is just 17%. And this doesn’t begin to take into account police-involved shootings where cops simply miss. (I’d guess, very roughly, based only on the NYPD concept of “object completion,” that misses may account for somewhere between 10% and 20% of the incidents in which police shoot.)
Similarly, based on people killed by police, Boston and Atlanta seem to shoot the same amount (not rate, just raw number). But in reality Atlanta police are three times as likely to shoot. The fatality rate after being shot by a cop in Boston is 71%, while in Atlanta it is 24%. Why are these rates so different? Distance to good trauma care is a proven factor, but it can’t be the only one. And we don’t have national data on this.
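The arithmetic behind these comparisons is simple enough to sketch. The fatality counts and rates below are the figures cited above; back-calculating total shootings from them (total shot ≈ killed ÷ fatality rate) is my illustration, not a published analysis:

```python
# Back-calculate how many people police shot from (a) the number killed
# and (b) the fatality rate once shot. Figures are those cited above.
cities = {
    # city: (killed by police, fatality rate once shot)
    "Las Vegas": (47, 0.41),
    "St. Louis": (20, 0.17),
}
for city, (killed, fatality_rate) in cities.items():
    total_shot = killed / fatality_rate
    print(f"{city}: {killed} killed -> ~{total_shot:.0f} shot")
# Las Vegas: 47 killed -> ~115 shot
# St. Louis: 20 killed -> ~118 shot
# More than twice the deaths, yet roughly the same number of shootings.

# Boston vs. Atlanta: equal fatality counts imply very different
# shooting counts when fatality rates are 71% vs. 24%.
ratio = (1 / 0.24) / (1 / 0.71)
print(f"Atlanta shoots ~{ratio:.1f}x as often per person killed")  # ~3.0x
```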
This is the “numerator” problem. We’re studying use of lethal force and don’t know how often it happens. Sure, being shot and killed is a proxy. But how good of a proxy is it? We don’t know. Hell, only for the past five years have we even known the number of people cops kill. So we’re forced, like it or not, to use the data we have rather than the data we want. Is that good enough? That’s the question. And Nix makes a good argument that the answer is “no.”
The third of three issues raised by Nix is about the “denominator.” Or, to put it another way, “Yes, the use of force is disproportionate… but compared to what?”
One can only be subjected to police force—including deadly force—conditional on interacting with a police officer in time and space. So how informative is it to calculate police-involved fatality rates for a population that is mostly never at risk?
…
A Venn diagram of the “at risk” and general populations would not perfectly align—instead, the “at risk” circle would be a small circle within the much larger general population circle. Perhaps some comparisons to other phenomena are in order. To estimate maternal mortality rates, researchers do not include all women in the denominator, but instead the number of live births. … To estimate the rate at which sharks bite people, researchers would need to determine who goes into the water. Studies concerned with police use of deadly force must be equally attentive to identifying a meaningful denominator.
In any given year, most people have zero chance of being killed by cops. Why? Because they don’t interact with cops in that given year. If we’re looking at individual police as a source of bias, it doesn’t make much sense to include those who don’t interact with cops. The Bureau of Justice Statistics estimates that in one year about 20% of Americans over 16 had some sort of contact with police.
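A toy example makes the denominator problem concrete. The population and fatality counts below are hypothetical round numbers; only the roughly-20% contact share comes from the BJS estimate just cited:

```python
# Same deaths, different denominators. Population and death count are
# hypothetical; the ~20% contact share is the BJS estimate cited above.
population = 1_000_000
contact_share = 0.20      # share with any police contact in a year (BJS)
killed_by_police = 10     # hypothetical count

per_100k_residents = killed_by_police / population * 100_000
per_100k_contacts = killed_by_police / (population * contact_share) * 100_000

print(f"Per 100k residents: {per_100k_residents:.1f}")                   # 1.0
print(f"Per 100k people with police contact: {per_100k_contacts:.1f}")   # 5.0
# The "rate" quintuples just by restricting the denominator to those
# actually at risk. And contact rates themselves differ across groups,
# which is exactly Nix's point.
```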
We know there are racial disparities in policing based on — in addition to any active bias on the part of police — 1) people who call 911, 2) victims of crimes, 3) criminal offenders, and 4) traffic stops. And numbers 3 and 4 are related to 5) disproportionate police presence and activity in high-crime neighborhoods. We do not have reliable data on the rate at which cops interact with people. And even then, all interactions are not equal. Context matters.
The available evidence indicates that both crime reporting and proactive police stops differ systematically across racial groups. … Is the solution to ignore this mediator—which is literally a necessary precondition for being killed by a police officer—and calculate rates for the entire population (most of whom are never at risk)? If the goal is to understand and improve officer decision-making as a way to save lives, then I am not convinced. Stopping a person and using deadly force on a person are two different decision points, with different antecedents, and need to be analyzed as such.
Yes, as we like to say: it’s complicated. Comparing racial disparity to overall population figures does have its use. It describes, well, police use-of-force on a given population. But it does not explain the police part of use-of-force.
To be clear, there is nothing inherently wrong about Schwartz and Jahn’s use of population denominators in their analysis, so long as readers bear in mind there are many factors (including police behavior) that drive the disparities. I am merely pointing out that it produces rates that are not all that helpful in understanding why police-involved fatalities vary across space as they do.
If we are to move beyond shock and outrage that there are racial disparities in police use of lethal force — if we are actually going to reduce the number of people killed by police — we need to understand the “how” and “why” as much as the “what.”
Full citation: Nix J. (2020). “On the challenges associated with the study of police use of deadly force in the United States: A response to Schwartz & Jahn.” PLoS ONE 15(7): e0236158. https://doi.org/10.1371/journal.pone.0236158
I agree that we are drawing conclusions from sparse data. That’s never a good idea. But how do we fix the lack of data?
I cringe when proposals are made for small data sets. The more data we have, the more meaningful the answers will be. And yes, we need medical-response data too.
How can we fix this? I’m actively searching for answers. Short of our government collecting good data, private foundations need to fund data collection. That’s all I can think of.