On facial recognition technology, can we regulate our way to safety?
The idea of facial recognition technology puts many people on edge, even those who aren’t ordinarily privacy hawks. One reason is its inaccuracy, especially for people who are not white men, which makes it a dangerous tool for law enforcement. But even if it were perfectly accurate, facial recognition could still be used to violate people’s rights. Last year, a Daily Beast investigation found that Amazon was actively pitching its face surveillance platform to ICE to help the agency crack down on migrant communities. An Intercept investigation found that IBM, as part of a long-term partnership with the New York Police Department, developed the ability to determine the ethnicity of faces, a technology that was then tested on public surveillance cameras without alerting city residents. In Britain, the Metropolitan Police uses facial recognition to scan crowds for people on watch lists, and China uses it for mass surveillance, including to track dissidents. [Karen Hao / The Algorithm from MIT Technology Review] The private sector, too, is using these technologies to track customers’ shopping habits, including breakdowns by gender, race, and mood.
The dangers are clear, but the solutions are not: The past few months have seen some tech giants and some governments attempt to stop the march toward facial recognition ubiquity, and the only thing that seems certain is that cooperation from both groups will be necessary to protect the public.
On the industry side, Google recently pledged to hold off on releasing facial recognition technology, citing its commitment to the responsible use of artificial intelligence. “Unlike some other companies, Google Cloud has chosen not to offer general-purpose facial recognition APIs before working through important technology and policy questions,” said Kent Walker, Google’s senior vice president of global affairs. The “other companies” he mentioned may have included Amazon, which is facing pushback from employees and, recently, shareholders for trying to profit from its Rekognition technology. “We provide our Rekognition service to a variety of government agencies, and we think that the federal government should have access to the best available technology,” Brian Huseman, Amazon’s vice president for public policy, told the New York City Council. The company also recently filed a patent application that seeks to combine its Ring doorbell technology with facial recognition, allowing users to add faces to a list of “suspicious” people and send it directly to law enforcement. [Levi Sumagaysay / San Jose Mercury News]
Around the same time, Microsoft’s president, Brad Smith, released a statement urging governments “to start adopting laws to regulate this technology. The facial recognition genie, so to speak, is just emerging from the bottle. Unless we act, we risk waking up five years from now to find that facial recognition services have spread in ways that exacerbate societal issues. By that time, these challenges will be much more difficult to bottle back up.” Smith feared “a commercial race to the bottom, with tech companies forced to choose between social responsibility and market success. We believe that the only way to protect against this race to the bottom is to build a floor of responsibility that supports healthy market competition. And a solid floor requires that we ensure that this technology, and the organizations that develop and use it, are governed by the rule of law.” [Brad Smith / Microsoft]
Now, a San Francisco lawmaker is proposing the country’s first complete moratorium on local government use of facial recognition technology. The Stop Secret Surveillance Ordinance, introduced by city Supervisor Aaron Peskin, “would ban all city departments from using facial recognition technology and require board approval before departments purchase new surveillance devices,” writes Sidney Fussell for The Atlantic. Berkeley and Oakland have passed similar rules that require public input and a privacy policy before new facial recognition technology is implemented, and Texas and Illinois require consent before facial data is collected, but nowhere in the country is there an outright ban. Bans have been proposed in Washington State and Massachusetts but have not yet been enacted. [Sidney Fussell / The Atlantic]
The proposal also bars city officials from using any data sourced from facial recognition by other agencies. If police in a neighboring city wanted to share a list of suspects derived from facial recognition, the San Francisco Police Department would be prohibited from using it. More broadly, the ordinance stipulates that any department that wants to purchase new surveillance equipment of any kind must justify the purchase by submitting a “surveillance technology policy” that explains “what information will be collected with the technology, how long it will be retained, with whom it can be shared, how members of the public can register complaints, and specified authorized and forbidden uses.” Every year, each department would also have to justify the technology’s helpfulness in reducing crime. [Sidney Fussell / The Atlantic]
But the bill would regulate only use by city government, not private companies: The face-unlock feature included on the latest iPhone model, for example, would still be legal. The San Francisco Police Department would be barred from using facial recognition software to scan video footage for suspects after a shooting, but a grocery store would be permitted to do the same thing to analyze shopper behavior. [Sidney Fussell / The Atlantic] The limits of San Francisco’s ordinance, and of industry self-regulation more broadly, underscore the need for industry and government to work together on behalf of the greater good. As of now, even with mounting pressure, it seems most are still working only on behalf of their own good.