Eleven current and former employees of OpenAI, alongside two from Google DeepMind, have signed an open letter, "A Right to Warn about Advanced Artificial Intelligence," which voices their concerns about the lack of safety governance and oversight at big tech companies in the AI industry and calls for better protection for whistleblowers who want to speak out about those concerns.
The letter states that AI companies hold "substantial non-public information" about the capabilities, limitations, and risks of their AI models, including the "loss of control of autonomous AI systems potentially resulting in human extinction", yet "have weak obligations to share this information with governments and society" and "strong financial incentives" to avoid effective oversight.
It also argues that protections are insufficient for whistleblowers, who are among the few people positioned to hold these companies accountable.
“Ordinary whistleblower protections are insufficient because they focus on illegal activity, whereas many of the risks we are concerned about are not yet regulated.”
The letter asks AI companies to commit to four principles that would:
- Stop them from forcing employees to sign non-disclosure agreements that prevent them from criticizing their employers over risk-related issues.
- Require them to create an anonymous process for current and former employees to raise risk-related concerns with board members, regulators, and an appropriate independent organization.
- Create a "culture of open criticism."
- Prevent them from disciplining or retaliating against current and former employees who have shared “risk-related confidential information after other processes have failed.”
The letter comes after OpenAI was recently criticized for requiring departing employees to sign non-disclosure agreements or risk losing the equity they had earned at the company. CEO Sam Altman has since apologized and promised to change the company's off-boarding protocol.
It also follows OpenAI's disbanding of its "Superalignment" safety team after two key members quit, citing concerns that safety was not being prioritized.
OpenAI has defended its safety practices, saying it is proud of its "track record providing the most capable and safest AI systems" and pledging to "continue to engage with governments, civil society and other communities around the world." Google has yet to comment.