Highlights
Sam Altman-led OpenAI has published the GPT-4o System Card, which details the safety procedures and assessments applied to its new model, GPT-4o.
The release reflects OpenAI's stated strategy of staying ahead of the risks AI poses as it transforms more industries.
The GPT-4o System Card sheds light on the measures OpenAI has taken to ensure the safety of its new model and on the potential dangers it presents. The report covers a range of issues, including the possibility of users growing emotionally attached to the AI.
It also addresses the risk of the model absorbing societal biases, as well as the possibility of it being used to produce harmful material such as fake news or instructions for making illicit substances. The document also describes the measures OpenAI has put in place to manage these risks, including post-training techniques, output classifiers, and rigorous moderation practices.
The System Card also contains the Preparedness Framework evaluations, a crucial part of the model's safety assessment against predefined risk criteria. Notably, the assessment found that GPT-4o's persuasion capabilities presented a borderline medium risk, which OpenAI says it has mitigated.
According to Joaquin Quiñonero Candela, the head of preparedness at OpenAI, the firm plans to continue researching and tracking these risks, especially as the model sees real-world use.
Anthropomorphization and Emotional Reliance
One of the primary concerns detailed in the GPT-4o System Card is the risk of users anthropomorphizing the AI, particularly with the introduction of the model's voice mode. This feature, which lets the AI mimic human conversation, may lead users to form emotional bonds with it.
The document mentions cases during testing in which users expressed emotions more characteristic of interpersonal relationships, raising concerns that users may come to over-rely on the AI's output.
OpenAI acknowledges that these relationships may have positive effects, such as providing a sense of companionship to those who need one, but they may also undermine users' relationships with other people. The company intends to monitor how these interactions develop and, drawing on internal studies and external academic research, to address the problem more effectively.
The GPT-4o System Card is part of OpenAI's effort to increase its accountability and build public confidence in the company and its AI products. The company has come under fire in recent months, particularly over the dangers posed by more sophisticated AI models.
By sharing its safety assessments and the measures it has taken to minimize risks, OpenAI seeks to reassure the public and its stakeholders that it is developing AI responsibly.
In addition, OpenAI has put in place several measures to control the model's voice capability, for instance by preventing the reproduction of copyrighted content and blocking outputs containing violent or erotic language. These steps reflect the company's forethought in tackling the new issues raised by its latest AI model.