OpenAI Executive Jan Leike Resigns, Calls for Stronger AGI Safety Measures
Highlights
- Jan Leike resigns from OpenAI, citing AI safety concerns amid product focus.
- OpenAI dissolves Superalignment team post-Leike and Sutskever departures.
- OpenAI CEO commits to AI safety after top researchers exit.
Jan Leike, OpenAI's head of alignment and leader of the 'Superalignment' team, has left the company, citing concerns that its priorities favor product development over AI safety.
Leike announced his resignation on May 17 in a series of posts on the social media platform X, formerly known as Twitter. He said OpenAI's leadership had chosen the wrong core priorities and should place greater emphasis on safety and preparedness as AGI development moves forward.
Jan Leike’s Safety Concerns and Internal Disagreements
Leike, who had been with OpenAI for about three years, wrote that safety culture and processes had taken a back seat to "shiny products." He also raised concerns about resource allocation, saying his team had struggled to obtain the computing power needed to carry out important safety research.
Building smarter-than-human machines is an inherently dangerous endeavor.
OpenAI is shouldering an enormous responsibility on behalf of all of humanity.
— Jan Leike (@janleike) May 17, 2024
"Building smarter-than-human machines is an inherently dangerous endeavor," Leike wrote, underscoring what he sees as OpenAI's responsibility to all of humanity.
His resignation came almost at the same time as that of Ilya Sutskever, the co-leader of the 'Superalignment' team and OpenAI's chief scientist, who had announced his own departure a few days earlier. Sutskever's exit was particularly notable because he co-founded OpenAI and contributed to many of its research projects, including the development of ChatGPT.
Dissolution of the Superalignment Team
Following the resignations, OpenAI has decided to disband the 'Superalignment' team and fold its functions into other research efforts across the company. Bloomberg reported that the decision is part of an internal restructuring under way since the governance crisis of November 2023, when CEO Sam Altman was temporarily removed and President Greg Brockman lost his chairmanship of the board.
The 'Superalignment' team was created to address the existential risks posed by advanced AI systems and was responsible for developing methods to control and steer superintelligent AI. Its work was considered critical preparation for the next generations of AI models.
Although the team has been dissolved, OpenAI says research on long-term AI risks will continue under John Schulman, who also leads a team focused on fine-tuning AI models after training.
OpenAI’s Current Trajectory and Prospects
The resignations of Leike and Sutskever, together with the disbanding of the 'Superalignment' team, have drawn heightened scrutiny of AI safety and governance at OpenAI. They follow a long period of internal tension and disagreement, particularly after Sam Altman was dismissed and then reinstated.
The departures and restructuring raise questions about how deeply OpenAI remains committed to safety as it continues to develop and release advanced AI models. The company recently introduced a new "multimodal" model, GPT-4o, which can interact with users in a more natural, almost human-like way. While the achievement demonstrates OpenAI's technical capabilities, it also raises ethical concerns around privacy, emotional manipulation, and cybersecurity risks.
Despite the turmoil, OpenAI maintains that its core mission is to build AGI safely and for the benefit of humanity. In a post on X, CEO Sam Altman acknowledged Leike's contributions and reaffirmed the company's commitment to AI safety.
“I’m very grateful to @janleike for his great contributions to OpenAI’s alignment research and safety culture, and I am really sad that he is leaving. He’s right we have a lot more work to do; we are determined to do it. I will post my longer version in the next couple of days,” Altman wrote.