Legal

Decision time for Silicon Valley AI safety bill

Whether the controversial California AI safety bill, SB 1047, becomes law is now in the hands of Governor Gavin Newsom

Martin Crowley
September 3, 2024

The AI safety bill, SB 1047, introduced by Senator Scott Wiener and designed to regulate the AI industry in California by preventing large AI models from causing catastrophic harm, has passed the State Senate by a vote of 29-9 in favor. Now it's down to Governor Gavin Newsom to decide whether to sign the bill into law or veto it.

Newsom has a lot of weighing up to do between now and September 30th, when he must make his final decision:

  • Many big tech companies (like OpenAI) and influential political and industry figures have opposed the bill, claiming it will stifle innovation and drive talent out of the state during a major AI boom.
  • Others (like Elon Musk) believe it should be passed, arguing that the AI industry needs regulation.

If Newsom approves the bill and it becomes law, tech companies will be required to write and submit safety reports for their AI models by January 2025.

By 2026, a 9-person ‘Board of Frontier Models’ will be appointed to review these safety reports and advise the attorney general on which companies/AI models do or don’t comply with SB 1047. The California attorney general will then have the power to stop AI firms from building AI models if the courts find them dangerous. 

If he decides to veto the bill, tech companies and industry leaders will celebrate, as they'd prefer regulation to be set by federal regulators at a national level rather than state by state. Federal laws are notoriously less restrictive and often take a long time to come to fruition, giving companies more time to develop models without legal restrictions.