Technological advancements such as location tracking and DNA testing over the past 20 years have contributed to law enforcement’s ability to close criminal investigations. But police use of facial recognition software in recent years has resulted in the wrongful arrests of seven Black people — foreshadowing another potential form of racial discrimination in the criminal justice system, critics say.
“Facial recognition is one of those things that we jumped on too quickly and it kind of just took over before we even knew it,” said Thaddeus Johnson, an assistant professor of criminology and criminal justice at Georgia State University who, along with his colleague Natasha Johnson, published the only empirical research on law enforcement’s use of facial recognition last October.
Like any technology, facial recognition — a form of artificial intelligence — will likely improve as it updates and evolves. Law enforcement’s use of it without thorough empirical research, however, may continue to threaten Black people and others with darker skin tones, because the technology cannot accurately distinguish the facial features of people of different races.
Johnson and Safiya Noble, director of the Center on Race and Digital Justice and author of “Algorithms of Oppression: How Search Engines Reinforce Racism,” help us understand why they are sounding the alarm about law enforcement’s use of facial recognition.
What is facial recognition technology?
The first facial recognition technology was developed in the 1960s by Woodrow Wilson Bledsoe, Helen Chan Wolf, and Charles Bisson, who set out to program computers to recognize faces. Bledsoe, a mathematician, received funding from the CIA to create the system, the Observer reported. They loaded 10 photographs of different people, most likely white, into a database and trained the computer to divide a face into features, then compare the distances between those features to identify a specific face.
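The feature-distance idea behind that 1960s system can be sketched in a few lines of Python. The landmark names, coordinates, and matching threshold below are purely illustrative, not details of Bledsoe’s actual system: each face is reduced to the distances between a handful of marked points, and a query face is matched to whichever stored profile has the most similar set of distances.

```python
import math

def feature_vector(landmarks):
    """Reduce a face to the pairwise distances between its landmark points.

    `landmarks` maps an illustrative feature name to an (x, y) coordinate,
    standing in for the measurements a human operator once marked by hand.
    """
    names = sorted(landmarks)
    dists = []
    for i in range(len(names)):
        for j in range(i + 1, len(names)):
            (x1, y1), (x2, y2) = landmarks[names[i]], landmarks[names[j]]
            dists.append(math.hypot(x2 - x1, y2 - y1))
    return dists

def match(query, database):
    """Return the name of the database entry closest to the query face."""
    qv = feature_vector(query)

    def distance(entry):
        ev = feature_vector(entry[1])
        # Euclidean distance between the two feature vectors
        return math.sqrt(sum((a - b) ** 2 for a, b in zip(qv, ev)))

    return min(database, key=distance)[0]

# A toy "database" of two face profiles (hypothetical values).
db = [
    ("person_a", {"left_eye": (30, 40), "right_eye": (70, 40),
                  "nose": (50, 60), "mouth": (50, 80)}),
    ("person_b", {"left_eye": (28, 42), "right_eye": (76, 42),
                  "nose": (52, 65), "mouth": (52, 88)}),
]
query = {"left_eye": (29, 41), "right_eye": (71, 41),
         "nose": (50, 61), "mouth": (51, 81)}

print(match(query, db))  # → person_a
```

A sketch like this also hints at the bias problem the article describes: the system can only be as good as the faces in its database and the features it was tuned to measure.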
Over the next 60 years, facial recognition became more sophisticated at identifying skin textures and using 3D images. Now the software can comb through the tens of millions of photos in the FBI’s Next Generation Identification system, as well as databases created by facial recognition companies that scrape the internet and social media for the faces of billions of people.
Those software upgrades also contributed to today’s biometric screenings, such as fingerprint access to cellphone applications and ATMs. Home security systems, closed-circuit surveillance, computers — any equipment with a built-in camera can have facial recognition software installed.
But when it comes to accurately detecting darker skin tones, the technology hasn’t made significant improvements.
“The kinds of people who are often making software, making products coming out of tech corridors around the world, have limited worldviews and lack exposure to lots of different kinds of people … and we see that in every industry,” Noble said.
Johnson said that facial recognition software’s algorithms are more than likely unconsciously biased to recognize features familiar to their programmer, who is most likely a white man. When the program is put to work in real life, it is most likely comparing images to databases that contain more Black and brown faces than the white ones it’s trained to recognize.
Without cultural education or exposure to different races and ethnicities, Noble said, software programmers will continue to create flawed facial recognition technology that in the long run will do more harm than good.
How is facial recognition used?
Facial recognition wasn’t tested in the real world until 2001, when federal and local law enforcement in Florida’s Tampa Bay area used it during that year’s Super Bowl. It’s unclear why they decided to experiment at this event rather than other heavily attended gatherings such as New Year’s Eve in Times Square.
As the crowd of 71,921 fans entered the stadium, people stood still for their pictures to be taken. Without their knowledge, the photographs were compared against a database of criminal suspects. During the event, the system flagged 19 people with outstanding warrants, but police were not prepared to make those arrests, a detective told The New York Times at the time.
That same year, the city of Tampa accepted a free, one-year trial of the facial recognition software used during the Super Bowl. City officials set up face scanners in their downtown entertainment district but did not find them to be effective because the program didn’t have a database to compare images to, and the software couldn’t keep up with trying to scan moving images on a public street, Vice reported.
The flaws in facial recognition technology haven’t stopped law enforcement and customer-service-based industries from continuing to use it.
Airports, businesses, social media, marketing, and cellphone companies use facial recognition technology for a variety of reasons that can be as insignificant as letting app users apply filters to photographs.
This year, the Transportation Security Administration announced that it will expand its facial recognition program to more than 400 airports across the country in the coming years. The pilot program, currently in 25 airports, uses a facial-matching algorithm that is 97% effective “across demographics, including dark skin tones,” a TSA press secretary told Fast Company in June.
Clearview AI is a facial recognition company that provides software to law enforcement and government agencies. Its collection of images amounts to a mega police lineup, critics told Business Insider in April. Clearview AI says it has collected 30 billion images from the internet, Facebook, and other social media — without permission from the social media companies. Facebook and other social media companies have sent Clearview AI cease-and-desist letters for violating users’ privacy.
Critics are also concerned that hackers could maliciously break into facial recognition databases and steal personal information.
But Noble said whether we like it or not, “everybody’s face is in facial recognition databases with or without their consent if they are on social media. If they have any photos of themselves up anywhere online, including photos they did not post of themselves but that others posted, those are all available to … a variety of different kinds of agencies.”
Johnson said that facial recognition is a very good tool for getting a lead into solving a crime, and its law enforcement use should be restricted to case detectives and investigators. “But the problem is we are so blindly trusting AI that generally the police just use it. … That’s why there needs to be regulations,” he said.
“We are not sure if they’re calibrating their equipment correctly. We’re not sure of the training of the people who are using these technologies. What about officers who have body-worn cameras on that’s doing this real-time recording but are also equipped with this mobile facial recognition? [The officers are] basically a walking and talking constitutional violation of sorts,” Johnson said.
The most well-known case in which facial recognition helped accurately identify suspects came after the Jan. 6, 2021, insurrection at the U.S. Capitol in Washington, D.C. Federal law enforcement officials were able to identify well over 1,000 mostly white people accused of breaching the Capitol and assaulting several law enforcement officers, The Washington Post reported. Investigators used facial recognition technology to match the suspects’ images from that day to photographs and videos of them on social media. In some cases, state Department of Motor Vehicles databases of driver’s license photos were used to match suspects.
There are no reports of any of the Jan. 6 suspects filing a wrongful arrest lawsuit due to the use of facial recognition.
Are there legal issues at stake?
Legal experts saw the Super Bowl debut of facial recognition technology as a violation of privacy. Those Fourth Amendment concerns persist more than 20 years later, especially since there haven’t been any proposed federal regulations on how to use the technology without violating individuals’ civil rights.
The White House’s Office of Science and Technology Policy released in October 2022 a nonbinding “Blueprint for an AI Bill of Rights” that provides five principles on the “design, use and deployment of automated systems to protect the American public in the age of artificial intelligence.”
But in a December 2022 conversation hosted by the Brookings Center for Technology Innovation, legal experts criticized the White House’s initiative for leaving out guidance for law enforcement agencies’ use of artificial intelligence, specifically facial recognition.
“Excluding law enforcement may continue the oversurveillance of certain populations, communities, and individuals under the guise of public safety and national security and will not necessarily reduce the history and manifestation of rampant discrimination against people of color and immigrants. If law enforcement were included in the Blueprint provisions and guidance, it could have offered new guardrails and agency for individuals left with little recourse when misidentified and/or scrutinized by existing and emerging AI technologies,” according to commentary from the Brookings Center for Technology Innovation’s online event.
Why does this matter to Black people?
The Jan. 6 investigation could imply that facial recognition works well, but critics say it does not so long as it keeps misidentifying Black people and individuals with darker skin tones. The 2018 Gender Shades study showed that the off-the-shelf facial recognition systems that companies and law enforcement use performed worst on Black women’s faces, and on Black people in general, while being most reliable on white men’s faces.
“There are already practices and policies that are inequitable and result in inequitable outcomes. Why the hell do we think that facial recognition technology will make that better? No, it only exacerbates those things,” said Johnson, who was previously an acting police captain in Memphis, Tennessee.
Though there aren’t any reported cases of a wrongful conviction connected to the use of facial recognition, since 2018 six Black men and one Black woman have spent days in jail after a facial recognition match falsely connected them to felony-level crimes. Police departments in predominantly Black cities in Louisiana, Maryland, Michigan, and New Jersey have been accused of, and sued for, false arrests stemming from the use of facial recognition technology.
“The number of people who are ensnared relative to the millions of people for whom there’s no problem means that the seven people who are falsely accused or imprisoned are just kind of like collateral damage to these companies,” Noble said. “And I’m sure they do their calculus on it and say, ‘Well, if we have to settle some lawsuits, it’s cheaper than redesigning the product.’ So we become — our communities become — the collateral damage.”
Apple Inc. was one of the first business entities slapped with a wrongful arrest lawsuit that stemmed from the use of facial recognition to identify a possible suspect in a string of store robberies throughout the Northeast. Ousmane Bah, an 18-year-old college student in New York, sued the tech company for $1 billion after he said he was falsely arrested in 2018. The New York Police Department made the arrest based on a photograph of the possible suspect Apple turned over to police. The police allegedly agreed that the person in the picture did not look like Bah, Business Insider reported. The lawsuit was “voluntarily dismissed, with prejudice against the defendant(s) Apple Inc.” in 2021, according to online federal court records.
One of three cases out of the Detroit Police Department was that of Porcha Woodruff, who at the time of her arrest was eight months pregnant and questioned for 11 hours about robbery and carjacking accusations she knew nothing about. Woodruff, Robert Williams, and Michael Oliver had similar experiences with Detroit police and are each suing.
“I think we should have a moratorium on facial recognition technologies until it can be determined that they are safe and used in ways that are safe. There are many people who think that facial recognition technologies, myself included, should be made illegal because they’re too consequential in the current ways that they’re used. … Bans on facial recognition is actually a public safety imperative,” Noble said.
In March, Rep. Pramila Jayapal, a Democrat of Washington state, and Sen. Edward Markey, a Democrat of Massachusetts, reintroduced the Facial Recognition and Biometric Technology Moratorium Act in Congress. The bill would place a moratorium on law enforcement use of facial recognition until policymakers create regulations and standards that protect constitutional rights and public safety. This is the third time the bill has been introduced.
Virginia and New Orleans reversed their short-lived facial recognition bans. In Virginia, lawmakers used the eight-month ban to evaluate the technology and create policies that include having corroborating evidence with a facial recognition match before pursuing the match as a lead.
Johnson said he is currently working on research that explores the possibility of facial recognition being used to further assist in solving crimes and perhaps put an end to the no-snitching culture. Violent crimes such as murder, sexual assault, and hate crimes tend to go unreported and unsolved in Black and brown communities because of historic distrust of the criminal justice system and fear of retaliation.
Theoretically, Johnson said, facial recognition technology can help identify witnesses and victims of crime and amplify the work of police departments across the country, if used correctly.
“It [facial recognition] should be helpful, but we just don’t have enough research, and I’ve cautioned against wildly deploying these things and doing so without even having an inkling of an idea if it has any public safety value, scientifically,” Johnson said.
Capital B is a nonprofit news organization dedicated to uncovering important stories — like this one — about how Black people experience America today. As more and more important information disappears behind paywalls, it’s crucial that we keep our journalism accessible and free for all. But we can’t publish pieces like this without your help. If you support our mission, please consider becoming a member by making a tax-deductible donation. Thank you!