OpenAI today announced the formation of a revamped Safety and Security Committee, following recent departures and the dissolution of its previous oversight body. This move comes as the company doubles down on industry self-governance amid growing calls for external oversight and regulatory scrutiny.
Led by members of the board of directors, the committee includes CEO Sam Altman alongside industry leaders Bret Taylor, Adam D’Angelo, and Nicole Seligman. It also comprises internal technical and policy experts, signaling OpenAI’s commitment to strengthening safety measures as it trains its next-generation AI model.
The committee’s initial focus will be to evaluate and refine OpenAI’s existing processes and safeguards over the next 90 days. This evaluation will incorporate feedback from external experts, including former NSA cybersecurity director Rob Joyce.
“We are proud of our industry-leading models in both capabilities and safety,” the company said in a statement. “However, we welcome robust debate at this critical juncture and are committed to continuous improvement.”
This announcement comes in the wake of management controversies and calls for stricter government regulation from former OpenAI board members. Critics argue that self-regulation alone may be insufficient to address the complex challenges and potential risks associated with increasingly powerful AI systems.
Despite these concerns, OpenAI’s new committee is tasked with the immediate challenge of ensuring the company’s AI safeguards are robust and effective. The success of this effort could have significant implications for the future of AI development and regulation.
