Axon’s Taser-Drone Plans Prompt AI Ethics Board Resignations

The company has backed down on its proposal to address school shootings, but the damage was already done.
Illustration: Elena Lacey; Getty Images

A majority of Axon’s AI Ethics Board resigned in protest yesterday, following an announcement last week that the company planned to equip drones with Tasers and cameras as a way to end mass shootings in schools.

The company backed down on its proposal Sunday, but the damage had been done. Axon had first asked the advisory board to consider a pilot program to outfit a select number of police departments with Taser drones last year, and again last month. A majority of the AI Ethics Board, which comprises AI ethics experts, law professors, and police reform and civil liberties advocates, opposed it both times. Advisory board chairman Barry Friedman told WIRED that Axon never asked the group to review any scenario involving schools, and that launching the pilot program without addressing previously stated concerns is dismissive of the board and its established process.

In a joint letter of resignation made public today, nine members of the AI Ethics Board said the company appeared to be “trading on the tragedy of recent mass shootings” in Buffalo and Uvalde, Texas. Although Axon mentioned both shootings in the press release announcing the pilot project, CEO Rick Smith denied in a Reddit AMA that the company’s proposal was opportunistic. Smith said a Taser drone could still be years off, but that he envisions 50 to 100 Taser drones in a school, run by trained staff. Before Axon paused the pilot project, Friedman called it a “poorly thought out idea” and said that if it is unlikely to come to fruition, Axon’s pitch “distracts the world from real solutions to a serious problem.”

Another signatory to the resignation letter, University of Washington law professor Ryan Calo, calls Axon’s idea to test Taser drones in schools “a very, very bad idea.” Meaningful change to curb gun violence in the United States requires confronting issues like alienation, racism, and widespread access to guns, he says. Children in Uvalde, Texas, did not die because the school lacked Tasers.

“If we're going to address the prospect of violence in schools, we all know that there are much better ways to do that,” he says.

The board had earlier expressed concern that weaponized drones could lead to increased use of force by police, especially in communities of color. A report detailing the advisory board’s evaluation of a pilot program was due out this fall.

The real disappointment, Calo says, isn’t that the company didn’t do exactly what the board advised. It’s that Axon announced its Taser-drone plans before the board could fully detail its opposition. “All of a sudden, out of nowhere, the company decided to just abandon that process,” he says. “That’s why it’s so disheartening.”

He finds it tough to imagine that police or trained staff in a school will possess the situational awareness to use a Taser drone judiciously. And even if a drone operator managed to save the lives of suspects or people in marginalized or vulnerable communities, the technology wouldn’t stay confined to those situations.

“I think there will be mission creep, and that they will begin to use it in more and more contexts, and I think that the announcement by Axon to use it in a completely different context is proof of that,” Calo says. “A situation where there are ubiquitous cameras and remotely deployed Tasers is not a world that I want to live in. Period.”

Axon’s is the latest external AI ethics board to come into conflict with its associated tech company. Google famously convened and disbanded an AI ethics advisory group in roughly a week in 2019. These panels often operate without clear structure beyond asking members to sign a nondisclosure agreement, and companies can use them for “virtue signaling” rather than substantive input, says Cortnie Abercrombie, founder of the nonprofit AI Truth. Her organization is currently researching best practices for corporate AI ethics.

In Axon’s case, multiple AI Ethics Board members who spoke with WIRED said that the company did have a record of listening to their suggestions, including in a 2019 decision not to deploy facial recognition on body cameras. That made the sudden Taser-drone announcement all the more jarring.

There’s usually conflict in companies between people who understand a technology’s risks and limitations and those who want to make products and profits, says Wael AbdAlmageed, a computer scientist at the University of Southern California who resigned from the Axon AI Ethics Board. If companies like Axon want to take AI ethics seriously, he says, the role of these boards cannot be advisory anymore.

“If the AI Ethics Board says this technology is problematic and the company should not develop products, then they shouldn’t. I know it’s a difficult proposition, but I really think this is how it has to be done,” he says. “We’ve seen problems at Google and other companies for people they hired to talk about AI ethics.”

The AI Ethics Board tried to persuade Axon that it should be responsive to the communities affected by its products, Friedman says, rather than the police who buy them. The company did create a community advisory committee, but Friedman says that until AI ethics boards figure out how to bring local communities into the procurement process, “the vendors of policing technology are going to keep playing to the police.”

Four members of the AI Ethics Board didn’t sign the resignation letter. They include former Seattle police chief Carmen Best, former Los Angeles police chief Charlie Beck, and former California Highway Patrol commissioner Warren Stanley.