Facial recognition technology relies on algorithms that match faces in images to the identities of actual people.
It’s used to tag friends on Facebook. It’s being tested as a police tool.
Easily deployed in public places, the technology lends itself to monitoring groups—activists, migrants, and people historically subjected to profiling.
Trial and Error
The London Metropolitan Police’s test run of recognition technology proved the software “staggeringly inaccurate” and arguably an affront to legal ethics. The FBI’s software has drawn parallel criticism.
For one thing, the technology has come under fire over race bias. As race-related misidentifications prompt Microsoft and IBM to tackle bias, a critic argues that progress is not served by efforts “to make black people equally visible” to technology that will be “further weaponized against us.”
The Northern California branch of the American Civil Liberties Union spotted Amazon marketing its “Rekognition” software to law enforcement. By Amazon’s own account, Rekognition identifies people in crowds and tracks them in real time, checking them against databases containing tens of millions of faces. For its part, the ACLU published an announcement reporting that Amazon’s technology had erroneously matched multiple U.S. politicians with alleged criminals.
Yet similar technology is already in common use—not just on Facebook, but also by way of data collection from drivers’ and passport photos. Maryland investigators used it to find a mass shooting suspect.
Foxes and Henhouses?
Scanning people’s faces during live camera monitoring raises the stakes. It heightens concerns about the First Amendment’s protection of freedom of association, as well as the ever-present risk of misidentification.
As Microsoft President Brad Smith has argued, meaningfully limiting government agencies requires regulating the companies themselves. Otherwise, companies will compete to sell, and governments will keep buying new technologies and applying them in controversial ways.
The country’s largest producer of police body cameras, Axon, backed away, for now, from putting facial recognition surveillance software into the cameras it sells. The prospect of body cameras with real-time facial recognition technology raises urgent questions about Fourth Amendment protections against unlawful searches. Axon CEO Rick Smith noted the risk that police could “go wild” with the technology without appropriate regulations.
Governors and Mayors Step Up…
Orlando, Florida, bought and tested Amazon’s facial recognition technology. Meanwhile, San Francisco has banned it.
Hawaii, Maine, Missouri, New Hampshire, Oregon, Vermont, and Washington have restricted the use of driver’s-license databases in facial recognition systems. Oregon and New Hampshire have enacted specific provisions against facial recognition in police body cameras.
Maine and Vermont have banned law enforcement drones with facial recognition.
Ohio lets only certain officers use facial recognition software, and forbids its use in monitoring the activities of groups.
…And the Debate Continues.
The appropriate balance between public safety and privacy is a red-hot topic now—as it was in 1791, when the Fourth Amendment was enshrined in the Bill of Rights.
Meanwhile, technology will keep advancing in directions we can barely imagine today. Legal experts should stay apprised of the social fairness ramifications—as well as the impacts on privacy and on protections against unconstitutional searches.