Highlights
- OpenAI has added General Paul Nakasone to its Safety and Security Committee
- While OpenAI's goal is to enhance users' safety, the move is fueling intense backlash
- OpenAI has been in the crosshairs of critics lately over its deal with Apple
Renowned Artificial Intelligence (AI) giant OpenAI is taking the security of its Large Language Models (LLMs) more seriously and has now appointed former United States Army General Paul M. Nakasone to its Board of Directors.
U.S. General Brings Cybersecurity Experience to OpenAI
General Nakasone retired at the beginning of 2024 after serving in different capacities across all levels of the U.S. Army.
He played an important role in the creation of U.S. Cyber Command and was the longest-serving Commander of USCYBERCOM. Nakasone also led the National Security Agency and Central Security Service (NSA/CSS), where he was responsible for safeguarding the United States’ digital infrastructure and advancing the country’s cyber-defense capabilities.
He frequently discharged these duties in collaboration with elite cyber units in the United States, the Republic of Korea, Iraq, and Afghanistan.
General Nakasone will bring this experience and expertise to the OpenAI Board’s Safety and Security Committee, which is charged with making recommendations on safety and security decisions for all OpenAI projects and operations. The appointment reflects OpenAI’s commitment to upholding the safety and security of its models and its users’ data. It also underscores the growing significance of cybersecurity as the impact of AI technology continues to grow.
According to the Sam Altman-led AI company, Nakasone’s insights are crucial to OpenAI’s efforts to better understand the role of AI in strengthening cybersecurity, including the rapid detection of and response to cybersecurity threats. This could curb the incidence of cyberattacks on hospitals, schools, and financial institutions.
Security Concerns Amid New Appointment
Notably, OpenAI announced the launch of this Safety and Security Committee last month. The team is led by Bret Taylor and includes other members such as Altman, Adam D’Angelo, and Nicole Seligman. The decision to introduce the committee came in response to voiced concerns about AI’s capacity to expose humans to certain risks.
Ironically, OpenAI is currently facing intense backlash for hiring General Nakasone. The surveillance experience he brings onboard, along with concerns about data monitoring in OpenAI's deal with Apple, has compounded safety concerns in recent times.
Within OpenAI itself, some employees have resigned over controversies surrounding the company. Jan Leike, an ex-employee at OpenAI, once expressed his concerns regarding the company, pointing out that product development seemed to be valued more than safety measures.
Meanwhile, General Nakasone says OpenAI’s mission aligns with his values and experience in public service.
Read More: Federal Reserve Moves Against Evolve Bank, Is Another Regional Bank Collapse In View?