US government re-launches AI safety tool

The National Institute of Standards and Technology has re-released a tool designed to test the safety of AI models against attacks

Martin Crowley
July 29, 2024

The National Institute of Standards and Technology (NIST) has re-released Dioptra (named after the classical astronomical and surveying instrument) to help government agencies and small-to-medium-sized businesses assess, analyze, and monitor AI risks when training their models.

According to NIST: “Testing the effects of adversarial attacks on machine learning models is one of the goals of Dioptra. It could help the community, including government agencies and small to medium-sized businesses, conduct evaluations to assess AI developers’ claims about their systems’ performance.”
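
To make that concrete, below is a minimal sketch of the kind of adversarial evaluation Dioptra is built around: perturb inputs with the fast gradient sign method (FGSM) and compare a model’s accuracy before and after the attack. This is a generic PyTorch illustration, not Dioptra’s actual interface, and the model and data here are stand-ins.

```python
# Minimal FGSM robustness check: a generic illustration only,
# NOT Dioptra's API. Model and data below are stand-ins.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Stand-in classifier and a batch of fake "images" with labels.
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
images = torch.rand(64, 1, 28, 28)
labels = torch.randint(0, 10, (64,))
loss_fn = nn.CrossEntropyLoss()

def fgsm_attack(model, x, y, epsilon):
    """Perturb x in the direction of the loss gradient's sign."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = loss_fn(model(x_adv), y)
    loss.backward()
    return (x_adv + epsilon * x_adv.grad.sign()).clamp(0, 1).detach()

def accuracy(model, x, y):
    with torch.no_grad():
        return (model(x).argmax(dim=1) == y).float().mean().item()

clean_acc = accuracy(model, images, labels)
adv_acc = accuracy(model, fgsm_attack(model, images, labels, epsilon=0.1), labels)
print(f"clean accuracy: {clean_acc:.2%}, accuracy under FGSM: {adv_acc:.2%}")
```

The drop from clean to adversarial accuracy is the measurement at issue: a tool like Dioptra packages experiments of this shape in a modular, reproducible testbed so that attacks and defenses can be compared across models, rather than run as one-off scripts.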

Alongside Dioptra, NIST has also released guidance documents outlining ways to mitigate the risks of AI. Both are promising steps toward the US and UK governments gaining a greater understanding of AI and, in turn, mitigating its risks. They will also give US developers, businesses, and policymakers practical tools for navigating the complex safety and performance issues the industry currently faces.