Breaking: OpenAI Forms New Committee for Safety and Security
Highlights
- OpenAI forms a Safety and Security Committee, set to issue its first recommendations within 90 days.
- The committee is led by Bret Taylor and includes CEO Sam Altman.
- The move responds to ongoing AI safety debates and internal criticism.
OpenAI has announced the establishment of a new Safety and Security Committee. The move is intended to guide key safety and security decisions across the organization's projects and operations.
The committee will recommend procedures to the full board and help put effective processes in place within OpenAI's development frameworks, especially as the company begins training its next frontier model.
OpenAI Introduces Safety and Security Oversight
The new committee is led by Bret Taylor; its members include OpenAI CEO Sam Altman, Adam D'Angelo, and Nicole Seligman. The group's first task will be to assess and improve OpenAI's safety and security practices.
The committee is expected to deliver its first recommendations within 90 days, which will shape the safety measures applied to OpenAI's projects. Its formation signals that OpenAI intends to maintain high safety standards as it pursues more capable artificial intelligence.
OpenAI Board forms Safety and Security Committee, responsible for making recommendations on critical safety and security decisions for all OpenAI projects. https://t.co/tsTybFIl7o
— OpenAI (@OpenAI) May 28, 2024
The announcement follows the recent start of training on OpenAI's latest AI model, intended to succeed the GPT-4 system that currently powers its ChatGPT chatbot. The organization has stated its commitment to leading not only in capability but also in safety, reflecting an awareness of the potential risks of AI development.
What Led to This Move?
The formation of the Safety and Security Committee is timely, as AI safety has become a major topic of discussion in the technology community.
Some have interpreted OpenAI's decision to formalize this committee as a response to ongoing controversies over AI safety standards, particularly after several of its employees resigned or publicly criticized the organization.
Jan Leike, a former OpenAI employee, had previously voiced concerns about the company, arguing that product development appeared to be valued over safety measures.
The new committee is part of OpenAI's effort to preserve its innovative character while keeping safety a central priority in its development process.