LAPD retreats from ‘predictive policing,’ for now
A recent episode of the philosophy podcast “Hi-Phi Nation” began at a weekly meeting of the Los Angeles Police Department’s commissioners, a group of five civilian volunteers appointed by the mayor. That day, they were voting on whether to approve a charitable donation to reconfigure a conference room into a Community Safety Operations Center (CSOC). It sounded innocuous. It was not. Protesters shouted “shame!” until they were ejected. Barry Lam, an associate professor of philosophy at Vassar College and the host of the podcast, said the funding in question was controversial because it represented “a small but symbolic step in LA’s ongoing move toward predictive policing technologies.” Police say that big data is a chance to replace the prejudice of human judgment with impartial data and algorithms. For opponents, algorithmic objectivity is a cover for an “efficiency tool to target, incarcerate, and control racial minorities in a rapidly gentrifying city.” [Barry Lam / Hi-Phi Nation]
“For years, critics have lambasted data-driven programs—which use search tools and point scores—saying statistics tilt toward racial bias and result in heavier policing of black and Latino communities,” writes Mark Puente for the Los Angeles Times. A recent study examined such data programs in Chicago, New Orleans, and Maricopa County, Arizona, concluding that “dirty data” led to biased policing and unlawful predictions. [Mark Puente / Los Angeles Times]
Sarah Brayne, an assistant professor of sociology, embedded herself for years with the LAPD, studying how these new technologies are changing the relationship between the police and the community. One part of the system quantifies civilians according to risk, premised on the idea that a small percentage of high-impact people are disproportionately responsible for most crime. But it tracks a lot more than that. Police use index cards to write down information that comes up in interviews with civilians, not just criminal activity. [Sarah Brayne / American Sociological Review]
The cards list people who happen to be in the car with the person stopped, someone across the street, a neighbor who walked by. This information is entered into the system daily, and officers can then run the names through Palantir software, which generates a social network map for an individual, “who in the past they’ve been seen with, cars they’ve driven, where in the neighborhood they’ve been stopped at and so forth,” Lam explains.
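Palantir’s software is proprietary and its internals are not public. Purely as an illustration of the kind of social network map Lam describes, here is a minimal Python sketch built on invented field-interview records and names; nothing in it reflects the actual LAPD or Palantir implementation.

```python
from collections import defaultdict

# Hypothetical field interview (FI) card records: each stop logs everyone
# present, the vehicle, and the location, not just suspected criminal activity.
fi_cards = [
    {"people": ["A. Rivera", "B. Chen"], "vehicle": "4XYZ123", "location": "77th & Main"},
    {"people": ["A. Rivera", "C. Okafor"], "vehicle": None, "location": "Florence & Broadway"},
    {"people": ["B. Chen", "C. Okafor"], "vehicle": "4XYZ123", "location": "77th & Main"},
]

def build_network(cards):
    """Link every pair of people recorded on the same card."""
    ties = defaultdict(set)
    for card in cards:
        for person in card["people"]:
            ties[person].update(p for p in card["people"] if p != person)
    return ties

def profile(name, cards, ties):
    """Everything the database now 'knows' about one individual."""
    return {
        "associates": sorted(ties[name]),
        "vehicles": sorted({c["vehicle"] for c in cards if name in c["people"] and c["vehicle"]}),
        "locations": sorted({c["location"] for c in cards if name in c["people"]}),
    }

ties = build_network(fi_cards)
print(profile("A. Rivera", fi_cards, ties))
# {'associates': ['B. Chen', 'C. Okafor'], 'vehicles': ['4XYZ123'],
#  'locations': ['77th & Main', 'Florence & Broadway']}
```

The point of the sketch is that a name only needs to appear on a card (as a passenger, a bystander, a neighbor) for the system to begin accumulating associates, vehicles, and locations for that person.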
In order to determine risk, the system assigns points based on certain factors: five points, for example, for someone who was arrested with a handgun or has a conviction for a violent crime, and one point for every consensual police interview. But this means that if officers stop you and ask for information, and you willingly give it to them, and this happens five times, you will not only be in the system; you will have the same score as a person caught with a gun. “You can see how that might turn into somewhat of a self-fulfilling prophecy [or] a feedback loop,” Brayne said. “Where if you’re going out and specifically seeking out the people with high points values, and then you go and stop those people, and then that increases their points value.” [Barry Lam / Hi-Phi Nation]
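The formula is described above only in outline. The following sketch assumes a simplified version of it (five points for a gun arrest or violent-crime conviction, one point per consensual interview) to show how two very different people can end up with the same score, and how score-driven stops feed back into the score. It is not the LAPD’s actual implementation.

```python
# Assumed, simplified point values taken from the description above:
# 5 points per gun arrest or violent-crime conviction, 1 point per
# consensual field interview.

def risk_score(person):
    return (5 * person["gun_arrests"]
            + 5 * person["violent_convictions"]
            + 1 * person["consensual_interviews"])

caught_with_gun = {"gun_arrests": 1, "violent_convictions": 0, "consensual_interviews": 0}
stopped_often   = {"gun_arrests": 0, "violent_convictions": 0, "consensual_interviews": 5}

print(risk_score(caught_with_gun))  # 5
print(risk_score(stopped_often))    # 5 -- same score, no criminal conduct

# The feedback loop: officers seek out whoever scores highest, each resulting
# stop logs another interview, which raises the score and invites the next stop.
person = {"gun_arrests": 0, "violent_convictions": 0, "consensual_interviews": 2}
for _ in range(3):
    if risk_score(person) >= 2:                # score puts them on the patrol list
        person["consensual_interviews"] += 1   # the stop itself adds a point
print(risk_score(person))                      # 5 -- higher than when we started
```

Read this way, the score measures exposure to policing at least as much as it measures conduct, which is the loop Brayne describes.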
“Algorithms like these are often deployed in secret, making it impossible for the public to scrutinize them,” Ben Green, a Ph.D. candidate in applied mathematics, wrote for the Boston Globe recently. “Police in New Orleans quietly used predictive policing algorithms for several years without ever announcing they were doing so. Even members of the City Council were left in the dark.” Chicago’s police department has resisted repeated calls to disclose how its algorithm tries to predict gun violence. “By providing the appearance of a value-neutral solution to policing issues without addressing the underlying problems, these algorithms grease the wheels of an already discriminatory system,” Green writes. “They may make policing more efficient for officers, but they don’t evaluate whether the current system actually helps address social disorder.” Studies by the RAND Corporation “found no statistical evidence that these programs actually reduce crime.” [Ben Green / Boston Globe]
In Los Angeles, big data has backed down. Police Chief Michel Moore announced last month that he plans to scrap the program, bowing to criticism from community groups and a 52-page audit by the inspector general, which “found that the department’s data analysis programs lacked oversight and that officers used inconsistent criteria to label people as ‘chronic offenders,’” reports the Los Angeles Times. It found that 44 percent of those labeled chronic offenders had either zero or one arrest for a violent offense. Many had been accused only of nonviolent crimes. The points system and tracking database had been suspended in August, after an uproar among civil liberties groups. [Mark Puente / Los Angeles Times]
But even if these algorithms were more accurate, they might not be justified. Philosopher Renée Bolinger questions the idea that the likelier something is statistically, the more justified we are in treating it as true. In a recent article, she uses the example of John Hope Franklin, a preeminent historian who, shortly before being presented with the Presidential Medal of Freedom, hosted a celebratory dinner party at an exclusive club. All the other Black men present were uniformed attendants. A woman saw Franklin, mistook him for an attendant, and asked him for her coat. Statistically, the woman was perhaps justified in making that assumption, but she could not be certain, and the cost of her error was very high. [Renée Bolinger / Synthese]
In some cases, Bolinger writes, “the severity of the harm of a single, isolated mistake suffices to explain the wrong involved in unjustified acceptance.” In others, however, the wrong arises from the pattern of repeatedly exposing people to risks of harm over time. Often, those in charge of implementing policies such as big data policing or stop-and-frisk focus only on the potential harm they prevent, and have no conception of the harm they inflict.
Big data policing has a more fundamental problem than the possibility of mistakes: the premise that some people simply are “criminals” who need to be weeded out. The very idea that an innocent person could be mistaken for a person prone to criminality misunderstands that people’s behavior is shaped by their environments. Constantly being stopped by police is part of that environment. It can make people feel alienated, as though society does not want them; some of them might then become less likely to participate in society in a law-abiding way. Meaningful community investment, by contrast, might encourage that kind of law-abiding participation.
Anytime police arrest a person, or even detain her for a while in the street, they are imposing a penalty. They take her freedom, her time, and her dignity. As law professor Adam Kolber reminds us in his article “Punishment and Moral Risk,” if we as a society want to harm people through punishment, we need to find ways to do so that are morally permissible. For some, this means a high degree of certainty that the person is morally “deserving” of that punishment, and that the punishment is reasonable. For others, it means certainty that the action will make society safer. But big data policing ensnares plenty of people with no criminal involvement and, as the studies above show, has not been demonstrated to increase safety; so far, it fails on both counts.