The Hidden Role of Facial Recognition Tech in Many Arrests

The technology is spreading fast among police, and it is often wrong. But people charged with crimes are rarely told an algorithm came into play.

In April 2018, Bronx public defender Kaitlin Jackson was assigned to represent a man accused of stealing a pair of socks from a TJ Maxx store. The man said he couldn’t have stolen the socks because at the time the theft occurred, he was at a hospital about three-quarters of a mile away, where his son was born about an hour later.

Jackson couldn’t understand how police had identified and arrested her client months after the theft. She called the Bronx District Attorney’s Office, and a prosecutor told her police had identified her client from a security camera photo using facial recognition. A security guard at the store, the only witness to the theft, later told an investigator from her office that police had sent him a mugshot of her client and asked in a text message “Is this the guy?” Jackson calls that tactic “as suggestive as you can get.”

Jackson’s questions led a judge to order a hearing to determine whether the identification process had been unduly suggestive. Shortly afterward, Jackson says, prosecutors offered her client a deal: Plead guilty to petit larceny in exchange for a sentence of time served. The client, who had been in jail for roughly six months, agreed.

“I would have liked to go forward and go to hearings and go to trial because I think he very likely would have been acquitted, but sitting in jail waiting for that just did not make sense for him, so he ultimately took a misdemeanor plea deal” just to get out of jail, Jackson says. “He just wants to go on with his life.”

The prosecutor's disclosure was unusual. Across most of the US, neither police nor prosecutors are required to reveal when facial recognition is used to identify a criminal suspect. Defense attorneys say that puts them at a disadvantage: They can't challenge potential problems with facial recognition technology if they don't know it was used. It also raises questions of equity, since studies have shown that facial recognition systems are more likely to misidentify people who are not white men, including people with dark skin, women, and young people.

“Facial recognition technology use shouldn't be a secret,” says Anton Robinson, a former public defender now at the Innocence Project, a nonprofit dedicated to getting people who've been wrongly convicted out of prison. “It's such a big issue in criminal cases. Attorneys shouldn't be left to have these epiphany moments.”

Misidentification is historically a huge factor in sending innocent people to prison. The Innocence Project found that more than two-thirds of people exonerated through DNA evidence had been misidentified by witnesses, making it the leading factor in these convictions. Eyewitnesses can struggle to identify people they don’t know, especially when those individuals are of different racial or ethnic backgrounds.

The rules regulating facial recognition use are gaining importance as more police agencies adopt the technology. In 2016, the Georgetown Center on Privacy and Technology said police in most US states had access to the tech and that photos of about half of US adults were in a facial recognition database. The report also warned that the technology would disproportionately hurt Black people because of the technology's higher error rates for people with dark skin. In a 2019 report, the Georgetown center said New York police had made more than 2,800 arrests following face recognition searches between 2011 and 2017. Last year, BuzzFeed News reported that law enforcement agencies in 49 states, and more than 20 federal agencies, had at least tested facial recognition technology products from Clearview AI.

A handful of US police departments, including in New York City and Detroit, have since adopted policies governing the use of facial recognition. The New York and Detroit policies require two people to review the results of a facial recognition scan before the results are turned over to detectives and say facial recognition alone cannot be used as probable cause to carry out a search warrant or arrest.

The New York policy took effect in March 2020. The latest version requires prosecutors to tell defendants when facial recognition is used to identify them. But defense attorneys say they suspect police are not always adhering to the policy. The NYPD says on its website that the department knows of no cases of false arrest based on the use of facial recognition in an investigation, but the department did not respond to questions about specific cases.

Jackson, the public defender, says police often obscure their use of facial recognition programs by crediting a witness with identifying a suspect. But the witness may have been shown photos generated by a facial recognition program. The use of facial recognition programs “gets papered over by these human identifications that only could have been made with the use of facial recognition,” she says.

Facial recognition searches that lead to criminal charges most commonly begin with an image, often from security cameras. That photo is run through a system that compares the image to those in a large database, like a collection of mugshots or driver’s license photos. Florida’s system includes more than 13 million mugshots and 25 million driver’s license photos. A human analyst reviews the search results and picks out possible matches, which are then given to investigators.

The search results can include hundreds of photos, with confidence scores for each potential match. Investigators show potential matches to an eyewitness or police officer, and if they make a positive identification, they can typically testify at trial without ever mentioning facial recognition.
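To make the mechanics concrete, here is a minimal Python sketch of how such a one-to-many search might work. Everything in it is hypothetical: real systems derive face embeddings from trained neural networks and search databases of millions of photos, not the toy vectors and invented mugshot names used here.

```python
import math

# Hypothetical face "embeddings." Real systems derive these vectors from a
# trained neural network; the names and numbers here are invented for
# illustration only.
MUGSHOT_DB = {
    "mugshot_001": [0.12, 0.87, 0.33, 0.45],
    "mugshot_002": [0.91, 0.02, 0.57, 0.20],
    "mugshot_003": [0.14, 0.85, 0.30, 0.48],
}

def cosine_similarity(a, b):
    """Similarity between two embedding vectors, in [-1, 1]."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def search(probe, database):
    """One-to-many search: score the probe against every enrolled image
    and return all candidates ranked by confidence, best match first."""
    scores = [(name, cosine_similarity(probe, vec)) for name, vec in database.items()]
    return sorted(scores, key=lambda pair: pair[1], reverse=True)

# A probe image, say from a security camera, reduced to the same embedding space.
probe_embedding = [0.13, 0.86, 0.31, 0.46]

for name, confidence in search(probe_embedding, MUGSHOT_DB):
    print(f"{name}: confidence {confidence:.3f}")
```

The point of the sketch is that the system does not return a single answer; it returns a ranked list of candidates with scores, and the "identification" happens only when a human analyst picks someone from that list.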

Facial recognition technology is improving, but it is still flawed. Error rates have fallen 90 percent since the National Institute of Standards and Technology began testing systems in 2018, says Patrick Grother of NIST’s Image Group, which evaluates fingerprint, iris, and facial recognition software. The algorithms are better at analyzing low-quality images and recognizing aging faces, and some have made progress in recognizing faces from the side. Nevertheless, Grother says, “there’s a considerable spectrum of accuracy” and “image quality remains an issue.” NIST’s most recent test, which largely relies on a database of high-quality mugshot photos, found that even the best algorithms can be wrong more than 20 percent of the time.

Another problem: There are few rules governing the images police submit to facial recognition systems. In 2017, New York police believed that a theft suspect looked like Woody Harrelson, so they used a photo of the actor as the probe image, then arrested the tenth person who appeared in a facial recognition search. Elsewhere, police have submitted artists’ sketches of suspects to facial recognition systems.

Fighting Facial Recognition in Court

Substances such as DNA found at crime scenes are treated as evidence in criminal investigations, but attorneys and tech policy analysts say they’ve never seen a facial recognition scan introduced as evidence at trial. Still, the technology may have helped identify a suspect without the suspect or their legal team ever being told. That has prompted defense attorneys to hunt for hints that the technology was used and to devise strategies to force disclosure.

Jackson, the public defender, has created a guide for the National Association of Criminal Defense Lawyers. She advises attorneys to ask what made detectives suspicious of their client. If the basis of suspicion is unclear, photos or videos are listed as evidence, and their client is identified by a stranger, Jackson says lawyers should suspect the use of facial recognition. Jackson advises lawyers to request supporting materials for an investigation, including a list of all of the candidates returned by a facial recognition system and the confidence scores assigned to them.

False identification with facial recognition led to the arrests of Michael Oliver and Robert Williams in 2019 and 2020, respectively. Attorneys representing the men say they’ve requested lists of all potential matches in those cases as part of lawsuits against police.

“If police picked number 65 produced by the system, the defense should be able to say, ‘What about numbers one through 64?’” says Jumana Musa, director of the Fourth Amendment Center at the National Association of Criminal Defense Lawyers. “Any time a technology or something forensic or science is used in a court, the defense is supposed to have an opportunity to test that, to validate it, to see ‘Does it do what you said it did?’”

Clare Garvie, a former senior associate at Georgetown’s Center on Privacy and Technology, has spent the better part of a decade tracking police use of facial recognition and trained more than 2,000 defense attorneys on how to spot use of the technology. She advises them to look in arrest warrants for the names of companies that make facial recognition technology, police department units like the Facial Identification Section in New York City, or the names of specific police officers.

In her research, Garvie found that some analysts in Nebraska and Florida who were evaluating facial recognition search results were allowed to change the confidence threshold required to produce a match. If, for example, a search at a 90 percent confidence threshold returns no results, they can lower the threshold and search again.
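A minimal sketch of what that retry behavior might look like, in the same toy terms as the earlier example; the candidate names, scores, and thresholds below are invented for illustration, not drawn from any actual system.

```python
# Toy ranked results from a facial recognition search: (candidate, confidence).
# Names and scores are invented for illustration.
RESULTS = [("mugshot_014", 0.82), ("mugshot_203", 0.77), ("mugshot_065", 0.61)]

def matches_above(results, threshold):
    """Keep only candidates whose confidence clears the threshold."""
    return [(name, score) for name, score in results if score >= threshold]

# A strict search returns nothing...
candidates = matches_above(RESULTS, threshold=0.90)

# ...so the analyst lowers the threshold and searches again, and weaker
# candidates now surface as "matches."
if not candidates:
    candidates = matches_above(RESULTS, threshold=0.75)

print(candidates)  # [('mugshot_014', 0.82), ('mugshot_203', 0.77)]
```

The design concern is visible even in the toy version: the same probe image can yield zero matches or several, depending on a knob the analyst is free to turn.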

When defendants push back, police sometimes retreat, as may have happened in Jackson’s stolen-socks case. Garvie recalls a New York case in which a man charged with multiple counts of robbery, and facing a possible seven-year sentence, was offered a plea deal of 20 hours of community service after his defense attorney requested information about a facial recognition system.

Because many cases are resolved with plea deals, Garvie says there hasn’t been a clear test of whether disclosure is required. Oliver and Williams say they each considered plea deals before they were exonerated. “I think what we're waiting for, unfortunately, is probably a murder or rape case where the prosecution is not willing to plea out or drop charges,” Garvie says.

Signs of Change

There are some signs of change. Laws took effect last year in Utah and Washington state requiring police to disclose the use of facial recognition in criminal cases. The Washington law specifies that police cannot use facial recognition alone to establish probable cause in an investigation; it also requires independent tests of any facial recognition systems used by state agencies. Attorneys in both states said it was too soon to tell whether these laws are having an effect. Several other states are considering similar laws.

A proposed change to a 2021 Massachusetts law would stipulate that all records related to facial recognition searches be turned over to defendants, including other possible matches returned by facial recognition systems and the accuracy rate of predictions made by the tech.

Late last year, a group representing chiefs of police from major US cities, including New York, called for police to disclose when facial recognition is used to help identify a suspect. Christian Quinn, a coauthor of the report, is a former major in the Fairfax County Police Department in Virginia. He has a background in digital forensics and previously supervised investigators.

Quinn says the spread of facial recognition technology has led investigators to believe there will be suitable digital evidence in every case, much as the TV show CSI led people to believe there would always be DNA or other physical forensic evidence. In reality, security camera images can be grainy, low quality, shot from odd angles, or poorly lit, any of which can prevent a good match.

Given widespread mistrust of police in some areas, “we really need to put it out there and help educate our communities as to the value of this stuff and how we’re using it,” Quinn says. Referring to bans on facial recognition use in some cities, he says it otherwise “becomes very easy to discuss these technologies in terms of all or nothing.” 

As more states and cities consider restricting the technology, a September report by the Center for Strategic and International Studies, a think tank, suggests that Congress create national standards to prevent a patchwork of regulation. Lead author James Lewis says he supports facial recognition and thinks its spread is inevitable but that there should be transparency around how the technology is used in criminal investigations. Seven US states and cities, including Boston and San Francisco, have adopted full or partial bans of facial recognition by government agencies. Lewis doesn’t think Congress will follow suit, in part because of the January 6 attack on the US Capitol and ensuing investigation, saying, “I think that's influential, when you have to hide in a closet.”

An analysis by the Human Rights Law Review at Columbia University concluded that “defendants face meaningful barriers to challenging” the technology and called on Congress to pass a law requiring disclosure. The report also called for procedural safeguards, such as regular testing and a minimum threshold for the accuracy of facial recognition systems.

White House science and tech policy leaders endorsed more disclosure around the use of artificial intelligence as part of an AI Bill of Rights last fall. Regulation of facial recognition technology has drawn bipartisan support in Congress, but there are no federal restrictions on use of the tech by law enforcement, despite a documented lack of guardrails for federal agencies using the tech.

The National District Attorneys Association (NDAA) says it instructs its more than 5,000 members to use “professional judgment and discretion” when it comes to divulging the use of facial recognition and to consider issues like public safety, privacy, and relevance when making these decisions. NDAA officials did not respond to requests for examples of how disclosing facial recognition use in a criminal investigation could threaten public safety.

“The longer things remain secret, the harder it is to challenge them, and the harder it is to challenge them, the longer police go without courts putting limits on what they can do,” says Nathan Wessler, who leads the Speech, Privacy, and Technology Project at the ACLU.

An Attempt to Learn More

Defense attorneys say their best hope of getting police and prosecutors to reveal that facial recognition helped identify a suspect rests on a 1963 Supreme Court decision. In Brady v. Maryland, the court ruled that prosecutors must turn over to the defense any evidence in the government’s possession that is favorable to the accused.

The best-known case involving facial recognition and the Brady decision is that of Willie Allen Lynch, a Florida man convicted in 2016 of selling $50 worth of crack cocaine, based in part on facial recognition, and sentenced to eight years in prison. At trial, Lynch, who represented himself for part of the proceedings, argued that he should be able to cross-examine the crime analyst who had performed the facial recognition scan and sent a single photo of Lynch to investigators. In a pretrial deposition, the analyst testified that she didn’t fully understand how the facial recognition program worked.

In December 2018, a Florida appeals court rejected Lynch’s appeal, ruling that he had failed to demonstrate on Brady grounds that materials like photos of other potential matches would have changed the outcome of the trial.

Lynch then appealed to the Florida Supreme Court, seeking more information about how facial recognition was used in his case, including pictures of other potential matches and the software behind the algorithm. The appeal was supported by groups including the ACLU, the Electronic Frontier Foundation, the Georgetown Law Center on Privacy and Technology, and the Innocence Project. They argued that the uncertainty in facial recognition results should be treated the same as testimony from an eyewitness who said they weren’t sure they would recognize the person who committed a crime. The Florida Supreme Court declined to hear the case.

In the years leading up to the Lynch case, public defenders in Pinellas County, where Lynch was charged, said they had not been told that facial recognition was being used. However, the 2016 Georgetown report found that the Pinellas County Sheriff’s Office maintained a facial recognition system, FACES, that law enforcement agencies across Florida tapped thousands of times a year over the span of 15 years. In December 2021, the Sun-Sentinel and Pulitzer Center reported that Palm Beach County public defenders are rarely notified when police use facial recognition in a criminal investigation and that in Fort Lauderdale and West Palm Beach, FACES is disproportionately used in cases involving Black people.

In New York, judges in at least four cases have declined suspects’ requests for more information about the facial recognition program that contributed to their arrest. Jackson, the public defender in the Bronx, thinks it can be easy for people whose lives are never touched by the criminal justice system to not worry about facial recognition. She says that’s a mistake.

“I think people sometimes feel a sense of ease, like ‘That would never happen to me because I'm not somebody who has had a lot of interactions with the police,’” Jackson says. “But no one can guarantee that you don't look a lot like somebody who committed a crime. Nobody is safe from poor facial recognition technology.”

Updated 3/10/2022 12:25 pm ET: This story has been updated to correct the spelling of Jumana Musa's name.
