OpenAI is planning to expand its global affairs team from 35 to 50 by the end of the year, in response to growing regulatory concerns in the AI industry.
The global affairs department already has, or will soon begin hiring, members in countries with more advanced AI regulation, including Belgium, the UK, Ireland, France, Singapore, India, Brazil, and the US.
The team will be tasked with influencing global AI regulation: addressing regulatory challenges, engaging key policymakers, and helping shape new AI laws such as the recent EU AI Act, in which OpenAI was heavily involved (it successfully lobbied against the EU’s move to classify some of its models as “high risk,” sparing them the heaviest restrictions).
“The team will create laws that not only let us innovate and bring beneficial technology to people but also end up in a world where the technology is safe.” – David Robinson, OpenAI Head of Policy Planning
This move comes as OpenAI and other big tech firms such as Meta and Google face increasing scrutiny from the Department of Justice, the Federal Trade Commission, and EU and UK regulators over data privacy, user security, and antitrust issues tied to the development and training of their AI models.
OpenAI, however, claims it is doing this not because it needs “to quash regulations … because we don’t have a goal of maximizing profit,” but because it has “a goal of making sure that AGI benefits all of humanity.”
Despite its stated commitment to helping shape, govern, and implement AI legislation that “benefits all of humanity,” OpenAI has to date spent far less than its rivals on global affairs and lobbying, putting only $340,000 toward US government initiatives in Q1, compared to $3.1M by Google and $7.6M by Meta.