Just-In: OpenAI Taps Los Alamos National Laboratory To Study AI Safety

OpenAI and Los Alamos National Laboratory join forces to study AI safety in bioscience, assessing advanced AI models like GPT-4 for safe and effective use in laboratory settings.
By CoinGape Staff
Highlights

  • OpenAI collaborates with Los Alamos National Laboratory to study AI safety in bioscientific research.
  • The initiative aligns with the White House Executive Order on AI, focusing on evaluating AI capabilities and risks in biological applications.
  • The study will test GPT-4 and its real-time voice systems in lab settings to assess how AI can enhance laboratory tasks and whether those tasks can be performed safely.

OpenAI has announced a partnership with Los Alamos National Laboratory to study AI safety in bioscientific research. The collaboration is a significant step toward understanding both the potential and the risks of advanced AI systems in laboratory settings.

As artificial intelligence continues to transform various fields, this joint effort between a leading AI company and a premier national laboratory highlights the growing importance of balancing technological innovation with safety considerations, particularly in sensitive areas like bioscience.

OpenAI and Los Alamos National Laboratory Join Forces

The partnership aligns with the recent White House Executive Order on AI development, which tasks national laboratories with evaluating the capabilities of advanced AI models, including their potential in biological applications. This initiative demonstrates a proactive approach to understanding and mitigating risks associated with AI in scientific research, setting a precedent for responsible AI development in critical fields.

The study will focus on assessing how frontier models like GPT-4 can assist humans in performing tasks in physical laboratory environments. It aims to evaluate the biological safety aspects of GPT-4 and its unreleased real-time voice systems. This evaluation is set to be the first of its kind, testing multimodal frontier models in a lab setting.

The collaboration will assess how both experts and novices perform and troubleshoot standard laboratory tasks with AI assistance. By quantifying how advanced AI models can enhance skills across different levels of expertise in real-world biological tasks, the study seeks to provide valuable insights into the practical applications and potential risks of AI in scientific research.

OpenAI’s approach extends beyond its previous work by incorporating wet lab techniques and multiple modalities, including visual and voice inputs. This methodology is designed to offer a more realistic assessment of AI’s impact on scientific research and safety protocols, providing a fuller picture of AI integration in laboratory settings.


OpenAI Demands NYT Article Creation Details in Court

OpenAI has recently filed a motion in a New York court, requesting The New York Times (NYT) to disclose detailed information about its article creation process. The AI company is seeking access to reporters’ notes, interview records, and other source materials. This legal move is part of OpenAI’s defense against the NYT’s allegations that the company used its content without authorization to train AI models.

OpenAI argues that understanding the NYT’s journalistic process is crucial to determine the originality and authorship of the articles in question. The court filing challenges the NYT’s claims of substantial investment in high-quality journalism, with OpenAI’s lawyers asserting that transparency is necessary for a fair judgment. This case could have significant implications for intellectual property rights in the context of AI development and media content use.
