Microsoft has changed its terms of service policy to prevent US police departments from using its Azure OpenAI Service for AI-powered facial recognition.
Microsoft made two changes to its terms of service policy to reflect its decision to stop US police departments from using its services for AI facial recognition:
1. They added a sentence to establish more clearly that “this AI service may not be used for facial recognition purposes by or for police departments in the United States” (the previous wording hadn’t specifically included “for facial recognition purposes”).
2. They included a new bullet point specifically prohibiting the use of real-time facial recognition technology to “identify a person in uncontrolled, wild conditions” on mobile cameras, such as dashcams and body cams.
The ban applies only to US police, not international police forces, and doesn’t cover facial recognition performed with stationary cameras in controlled environments, such as a back office.
These changes come after military tech manufacturer Axon released a new product that summarizes audio from body cams. The release sparked an uproar among critics over the model’s potential to produce hallucinations and racial bias, with serious consequences for people’s lives. Microsoft is clearly keen to avoid similar issues and has changed its policy accordingly. The changes also follow Microsoft’s recent pitch of OpenAI’s image tool, DALL-E, to the Department of Defense (DoD) to help it execute military operations.