The AI safety bill SB 1047—introduced by Senator Scott Wiener to regulate the AI industry in California by preventing large AI models from causing catastrophic harm—has passed the State Senate by a vote of 29-9. Now it's down to Governor Gavin Newsom to decide whether to sign the bill into law or veto it.
Newsom has a lot to weigh up between now and September 30th, his deadline to sign or veto the bill:
If Newsom signs the bill and it becomes law, tech companies will be required to write and submit safety reports for their AI models by January 2025.
By 2026, a nine-person "Board of Frontier Models" will be appointed to review these safety reports and advise the attorney general on which companies and AI models do or don't comply with SB 1047. The California attorney general will then have the power to take AI firms to court and stop them from building AI models that are found to be dangerous.
If he vetoes the bill, tech companies and industry leaders will celebrate. They'd prefer regulation to come from federal regulators at a national level rather than from individual states—likely because federal laws tend to be less restrictive and often take a long time to come to fruition, giving companies more time to develop models without legal restrictions.