Breaking: OpenAI Forms New Committee for Safety and Security
Highlights
- OpenAI forms Safety Committee, set to issue first guidelines in 90 days.
- Safety Committee led by Bret Taylor, includes CEO Sam Altman.
- Committee formed in response to AI safety debates and internal critiques.
OpenAI has announced the establishment of a new Safety and Security Committee. This strategic move is aimed at positioning the organization to make key safety and security decisions about its projects and operations.
The committee will be responsible for recommending safety and security procedures to the full board and for establishing processes within OpenAI's development pipeline, especially as the company moves to train its next frontier model.
OpenAI Introduces Safety and Security Oversight
The new committee is led by Bret Taylor and includes OpenAI CEO Sam Altman, Adam D'Angelo, and Nicole Seligman. Its first task is to evaluate and further develop OpenAI's safety and security processes and safeguards.
The committee is expected to deliver its first recommendations to the board within 90 days, and these will shape the safety measures applied to OpenAI's projects. Its formation signals that OpenAI intends to keep safety a central concern as it pursues more capable artificial intelligence systems.
OpenAI Board forms Safety and Security Committee, responsible for making recommendations on critical safety and security decisions for all OpenAI projects. https://t.co/tsTybFIl7o
— OpenAI (@OpenAI) May 28, 2024
The announcement follows OpenAI's confirmation that it has begun training its next frontier model, intended to succeed the GPT-4 system that currently powers its ChatGPT chatbot. The company says it aims to lead not only in capability but also in safety, acknowledging the potential risks of advanced AI development.
What Led to This Move?
The formation of the Safety and Security Committee comes as AI safety has become a major topic of debate across the tech industry.
Some observers view OpenAI's decision to formalize this committee as a response to ongoing controversy over its AI safety standards, particularly after several employees resigned or publicly criticized the organization.
Jan Leike, a former OpenAI employee, has publicly voiced concerns that the company prioritizes product development over safety measures.
The new committee is part of OpenAI's effort to maintain its pace of innovation while keeping safety among its top priorities throughout the development process.