Microsoft bans US police from using Azure OpenAI for facial recognition

Microsoft is more explicitly prohibiting police departments from using its AI models to identify suspects, according to new code of conduct language for its Azure OpenAI Service.

The new language explicitly prohibits the use of its AI model services “for facial recognition purposes by or for a police department in the United States.” It also bars any law enforcement globally from using mobile cameras in uncontrolled, “in the wild” environments, including body-worn or dash-mounted cameras used by officers on patrol, to attempt to verify identities. Microsoft also prohibited the identification of people within a database of suspects or former inmates.

The company’s Azure OpenAI Service, which provides API access to OpenAI’s language and coding models through Microsoft’s cloud computing platform, recently added GPT-4 Turbo with Vision, OpenAI’s advanced text and image analyzer. In February, the company announced it would make its generative AI services available to federal agencies.

Microsoft’s Code of Conduct already prohibited the use of the Azure OpenAI Service to:

  • identify or verify individual identities based on people’s faces or other physical, physiological, or behavioral characteristics; or

  • identify or verify individual identities based on media containing people’s faces or other physical, physiological, or behavioral characteristics.

The new language outlines more specific prohibitions on law enforcement agencies using artificial intelligence systems for data collection. A recent ProPublica report documented the extent to which police departments across the country are adopting similar machine learning tools, including AI-powered software that examines millions of hours of footage from traffic stops and other civilian interactions. “Much of the data collected by these analyses, and the lessons learned from them, remain confidential, and the findings are often bound by confidentiality agreements,” the publication wrote. “This echoes the same problem with body camera video: Police departments are still the ones deciding how to use a technology originally intended to make their activities more transparent and hold them accountable for their actions.”

While some tech companies have taken similar steps to protect user data from law enforcement investigations, including Google’s recent location data privacy protections, others are leaning into collaboration. Last week, Axon, a provider of police cameras and cloud storage, introduced Draft One, an artificial intelligence tool that automatically transcribes audio from body cameras in order to “significantly improve the efficiency of police report writing.”