Lessons from the Resignation of the Ethics Committee of Axon AI | Fenwick & West LLP

[co-author: Sydney Veatch]

Days after the deaths of 19 children and two teachers in Uvalde, Texas, Axon Enterprise, a leading provider of law enforcement technology solutions including the Taser, announced plans to develop "non-lethal armed drones," dubbed Taser Drones, which Axon claimed could be installed in schools to combat mass shootings. However, Axon failed to consult with its AI ethics committee before announcing the development of the Taser drone, resulting in the resignation of nine of the 13 board members. In their joint resignation letter, the nine members noted that the board had held lengthy discussions with Axon in the past about similar technology and had voted against Axon moving forward, even under limited conditions.

As AI capabilities continue to expand, the development of law enforcement-related and weaponized AI has been and will likely remain controversial. Axon aimed to address this controversy by creating its own AI ethics committee. Formed in 2018, the board has drawn the attention of more than 40 civil rights groups, who have urged it to ban the development and deployment of certain capabilities, such as real-time facial recognition. Since then, the board has issued three annual reports detailing its recommendations to Axon on other important matters. It is unclear why Axon did not consult with its AI ethics board before announcing development of the Taser drone, especially given previous reports from the board indicating that Axon had been receptive to the board's suggestions, even when the recommendations were negative regarding proposed developments. Shortly after the resignations and considerable negative press, Axon announced that it was suspending development of the Taser drone and claimed the original announcement was meant to "start a conversation about a potential solution" and was not "a real launch timeline."

These resignations and the public relations backlash should remind all companies with AI ethics boards that creating such a board is only the first step; the company should have processes in place to consult with its AI ethics committee before major product decisions, as well as give serious weight to the committee's recommendations. The purpose of an AI ethics committee is not to validate and justify company actions; it is to challenge the company and push for the development of truly ethical products. A recent example of a company taking ethics recommendations seriously comes from Microsoft, which just retired its facial analysis capabilities that claimed to infer emotional states and identify attributes such as gender and age, in order to meet the requirements of its new Responsible AI Standard.

The most important takeaway from this story is that companies tackling AI-related risk issues (among other concerns) must not only establish appropriate policies and practices, but also use them. Whether the subject is privacy, trade secrets, HR issues, or AI risk, having a policy and not complying with it undermines the very principles that drove the creation of the policy in the first place.
