The AI Act, the European Union’s risk-based AI law and the first of its kind in the world, is now officially in force. It is designed to regulate the development, use, and application of AI in the EU and to protect citizens by ensuring that AI systems developed and used in the EU are safe and trustworthy.
At its core, the AI Act is a risk-based set of rules that categorizes AI systems according to how risky and potentially harmful they are.
“High-risk” AI systems, such as medical devices, biometric identification systems, critical infrastructure, and law enforcement tools, will be strictly regulated and must meet specific requirements around risk management and incident tracking before they’re allowed to enter the EU market.
“Minimal-risk” systems, like tools used to create social media content, will have fewer regulations to comply with, but “limited-risk” applications, such as chatbots, will face transparency obligations, such as informing users that they are interacting with an AI system.
“Unacceptable-risk” AI is banned outright under the new law. This category includes using AI for things like facial recognition from CCTV footage, social scoring, cognitive behavioral manipulation, and biometric categorization that infers sensitive attributes (such as race and sexual orientation) or predicts the likelihood of someone committing a crime.
The EU is giving tech companies four to six months to comply; if they don’t, they face a fine of either a percentage of their global annual turnover or a fixed amount, whichever is higher. Depending on the severity of the breach, penalties range from $8.1 million or 1% of turnover at the low end to $38 million or 7% of annual global turnover at the high end.
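To make the “whichever is higher” rule concrete, here is a minimal sketch in Python. The function name is hypothetical, and the tier figures are the USD approximations quoted above, not the law’s official euro amounts:

```python
# Illustrative sketch of the AI Act's "whichever is higher" fine rule.
# Tier figures are the article's USD approximations, not official EUR amounts.

def estimated_max_fine(global_annual_turnover_usd: float,
                       fixed_cap_usd: float,
                       turnover_pct: float) -> float:
    """Return the larger of the fixed cap and the turnover-based cap."""
    return max(fixed_cap_usd, turnover_pct * global_annual_turnover_usd)

# Most serious tier ($38M or 7%) for a company with $10B turnover:
print(estimated_max_fine(10e9, 38e6, 0.07))   # 700000000.0 -> 7% of turnover wins
# Least serious tier ($8.1M or 1%) for a company with $100M turnover:
print(estimated_max_fine(100e6, 8.1e6, 0.01)) # 8100000.0 -> fixed cap wins
```

As the second example shows, the fixed amount acts as a floor for smaller companies, while the percentage dominates for large ones.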
Although the AI Act is EU legislation designed to protect citizens in European countries, US tech companies will be greatly affected by the new rules, as many advanced AI systems come from American companies like OpenAI, Apple, Google, and Meta. The EU believes that ‘general-purpose’ models like ChatGPT and Gemini, which “present unique innovation opportunities” beyond just generating text and images, also present “challenges to artists, authors, and other creators and the way their creative content is created, distributed, used and consumed,” so these systems will face stricter requirements around copyright, testing, and cybersecurity.
While OpenAI has been preparing for the AI Act’s enforcement, publicly announcing that it will work “closely with the EU AI Office and other relevant authorities as the new law is implemented in the coming months,” others, like Meta and Apple, have declined to launch their newest models in Europe, citing “the unpredictable nature of the European regulatory environment.”