At the AI Safety Summit in Seoul, the following AI companies agreed to commit to a set of global standards for AI safety, called the Frontier AI Safety Commitments:
- Amazon
- Anthropic
- Cohere
- IBM
- Inflection AI
- Meta
- Microsoft
- Mistral AI
- OpenAI
- Samsung
- Technology Innovation Institute
- xAI
- Zhipu.ai (a Chinese company backed by Alibaba, Ant Group, and Tencent)
The Frontier AI Safety Commitments have been designed to ensure AI companies develop and deploy their AI models safely and are transparent about how they measure and manage associated risks.
By signing the commitments, these companies agree to publish thresholds outlining which risks they deem unacceptable, how they plan to mitigate those risks, and what they will do to ensure their AI developments don’t exceed the thresholds they set.
They have also committed to “not develop or deploy a model or system at all” if their mitigations can’t keep the risks below their thresholds.
This is the first time so many AI companies from across the world have come together and committed to the same AI safety standards. It not only sets a precedent for global AI safety standards but also pushes these companies to develop AI safely, be transparent, and take accountability for the risks associated with their developments.
OpenAI has noted that it already published a safety framework, the Preparedness Framework, which it developed and adopted last year and which contains practices the company says it “actively uses and improves upon”.
Some of these include red-teaming and testing before public release, monitoring for abuse, protecting children, and “collaborating with governments and stakeholders to prevent abuse, ensure transparency on AI-generated content, and improve access to accurate voting information.”