Apple finally commits to safe AI

Twelve months after its peers, Apple has signed a voluntary commitment to develop safe AI

Martin Crowley
July 29, 2024

Just ahead of integrating Apple Intelligence (its new AI platform, revealed last month) into its core products, Apple has signed a White House-backed set of voluntary AI safeguards, committing to the safe and responsible development of AI. It joins 12 months after 15 other tech giants, including Amazon, Anthropic, Google, Inflection, Meta, Microsoft, and OpenAI, signed the same commitments.

The safeguards are what the White House calls “the first step” toward the development of safe, secure, and trustworthy AI. They require companies to stress-test AI models and safety measures against simulated attackers in red-teaming exercises before release, and to share the results with the government and the public.

They also ask companies to work on unreleased AI models in secure environments, limiting access to as few employees as possible, and to develop content labeling systems (like watermarking) to clearly establish what content is and isn’t generated by AI.

These safeguards aren’t enforceable by law: companies that ignore them face no legal consequences. That sets them apart from the recently passed EU AI Act (a set of regulations designed to protect the public from high-risk AI), which is legally binding and comes into force next month, on August 1st.