Breaking: OpenAI Forms New Committee for Safety and Security
Highlights
- OpenAI forms Safety Committee, set to issue first guidelines in 90 days.
- Safety Committee led by Bret Taylor, includes CEO Sam Altman.
- Committee formed in response to ongoing AI safety debates and internal criticism.
OpenAI has announced the establishment of a new Safety and Security Committee. This strategic move is aimed at positioning the organization to make key safety and security decisions about its projects and operations.
The committee will be instrumental in recommending safety and security procedures to the full board and in putting efficient processes in place within OpenAI's development frameworks, especially as the company begins training its next frontier model.
OpenAI Introduces Safety and Security Oversight
The new committee is led by Bret Taylor, and its members include OpenAI CEO Sam Altman, Adam D'Angelo, and Nicole Seligman. Its first task will be to assess and improve OpenAI's safety and security processes.
The committee is expected to deliver its first recommendations within 90 days, which will shape the safety measures applied to OpenAI's projects. Its formation signals that OpenAI intends to uphold high safety standards as it pursues more capable artificial intelligence technologies.
OpenAI Board forms Safety and Security Committee, responsible for making recommendations on critical safety and security decisions for all OpenAI projects. https://t.co/tsTybFIl7o
— OpenAI (@OpenAI) May 28, 2024
The announcement follows the recent start of training on OpenAI's latest AI model, which is intended to succeed the GPT-4 system currently powering its ChatGPT chatbot. The organization has stated its commitment to leading not only in capability but also in safety, acknowledging the potential risks of AI development.
What Led to This Move?
The formation of the Safety and Security Committee is timely, as AI safety has become a major topic of discussion across the technology industry.
Some observers have interpreted OpenAI's decision to formalize this committee as a reaction to ongoing controversies over AI safety standards, particularly after several employees resigned or publicly criticized the organization.
Jan Leike, a former OpenAI employee, has previously voiced concerns about the company, arguing that product development appears to be prioritized over safety.
The new committee is part of OpenAI's effort to preserve its pace of innovation while keeping safety among the top priorities in its development process.