
As Trust in Police Wanes, Cops Are Replacing Human Witnesses With Robots

As more people criticize police or refuse to cooperate with them, writers Emily Galvin-Almanza and Khalid Alexander argue that most departments aren’t taking that criticism to heart; instead, they’re replacing human sources and interactions with computer-generated evidence.

A surveillance camera mounted to a wall.
Parker Coffman via Unsplash

In the past few years, as we’ve been distracted by historic crises and political disruption, many Americans may not have noticed the deepening of the surveillance state in which they live. Major cities have announced hefty investments in surveillance technologies, such as cameras for subway cars in New York City and aerial drones in Cleveland. Police officers in Nebraska leveraged private Facebook messages to press felony charges against a mother who allegedly helped her daughter access an abortion. And criminal evidence obtained via gunshot detection technologies, such as ShotSpotter, has been used in more than 100 cities, at times contributing to wrongful prosecutions and perpetuating our nation’s toxic history of oversurveillance of communities of color.

We should all be spooked. Whether on your street or your social media feed, police are watching, and the American people are not only falling prey, but also footing the bill. 

Unless voters and leaders in the tech world take action, this problem will not go away. From 2015 to 2018 alone, the number of surveillance cameras in the U.S. jumped from 47 million to 70 million, a nearly 50 percent increase, and some projections suggest the figure reached about 85 million by 2021. Even more sinister is what this trend in surveillance technology indicates: a need, on the part of American police, to substitute machines for the consent and collaboration of the public they are supposed to protect.

As the public’s appetite for police responses to certain situations has declined since George Floyd’s murder, police officers have increasingly relied on facial recognition software, gunshot detection technology, and other automated surveillance tools (such as Triggerfish and StingRay cell-site simulators) to maintain control, manufacture probable cause, and arm prosecutors with buckets of “evidence.” With ShotSpotter, for example, police no longer need a witness to make a 911 call in order to establish probable cause for a response. They can simply rely on overinclusive loud-noise detection in lieu of human participation and consent to achieve their desired outcomes.

To regain control over how technology is used in our criminal legal system, we must do three things immediately: 

  1. Demand cops disclose camera locations and functionality in all jurisdictions.
  2. Establish citizen panels to vet how law enforcement uses camera footage and social media monitoring tools. 
  3. Ensure that providers of AI technologies such as ShotSpotter and Clearview AI grant public defenders access to these tools to help stanch these technologies’ role in fueling mass incarceration. 

To understand the link between artificial intelligence and overpolicing, take the case of Michael Williams. In August 2020, Williams was accused of killing a young man from his community after a silent clip of security footage showed him driving through an intersection at the exact moment surveillance microphones allegedly registered a loud noise. According to the AP, prosecutors said the device’s secret algorithm had identified the bang as a gunshot, and Williams spent nearly a year in Cook County Jail for a crime he did not commit. 

Given that, as of 2016, more than a quarter of law enforcement agencies across the country had access to facial recognition technology, it should come as no surprise that this form of surveillance poses similar threats. In 2020, Michigan resident Robert Williams was wrongfully arrested and held for nearly 30 hours because of a flawed facial recognition match, joining the ranks of an untold number of people whose lives have collided with the criminal legal system purely due to faulty, nonconsensual monitoring technology.

This issue is not new. Police have long preferred mechanisms for manufacturing probable cause that lie within their own control. Dog searches, for example, have been called “probable cause on four legs” and give officers an excuse to search any person they choose. That pretext lets cops bypass getting the community’s consent for police activity.

This seizure of power through manufactured evidence is unacceptable. To ensure local law enforcement officials aren’t abusing surveillance technology, people first have to gain the means to know what police are doing behind the scenes, through, for example, a public approval process for all forms of police surveillance, as activists demanded in Oakland, California, in 2018. Communities can also seek to enact even stricter regulations, like banning facial recognition software, as Bellingham, Washington, did in 2021. If police are sidelining ordinary people by using technology, citizens can reclaim a seat at the table through municipal and county government processes, demanding oversight of the technology employed.

It was, in fact, ordinary people concerned about overpolicing and community safety in San Diego who brought widespread attention in 2020 to a little-known surveillance network of over 3,000 camera-equipped “smart streetlights” around the city, a network routinely accessed by law enforcement. Approval for the devices passed through City Council with relatively little attention; they’d been pitched as little more than a cost-effective, energy-saving system, rather than the panopticon they truly were. The network turned out to be the beginning of then-mayor and aspiring gubernatorial candidate Kevin Faulconer’s push to turn San Diego into a so-called smart city. But what Faulconer considered smart, community members impacted by overpolicing considered dangerous.

Outraged by the lack of public input, activists formed a diverse anti-surveillance coalition called TRUST SD and demanded an immediate end to Faulconer’s “smart city” experiment. As a result of these efforts, the city placed a moratorium on its multimillion-dollar smart-streetlights program. An ordinance requiring public review of surveillance tech finally became law in August—after activists spent three years organizing and pressuring local officials. And while this was an important win for transparency, it is only the beginning of what promises to be a continued battle against the unscrupulous use of tech in public safety.

When we think about what truly creates safety, it’s not police activity and violent jailing. Instead, offering people resources, support, better healthcare, and the means to lift themselves out of poverty has been shown to improve safety in ways overpolicing simply cannot. How many jobs programs, free clinics, or affordable apartments could a city’s surveillance budget fund? There is much to be learned from San Diego’s path to better citizen protection, but perhaps the most important lesson is that we cannot substitute police tech for the participation of community members in shaping our shared idea of safety.