AI Images From OpenAI To Carry Branded Metadata, Here’s Why
Highlights
- OpenAI will now include metadata in images generated using its tools
- The move is a response to growing privacy concerns
- OpenAI has been adjusting its offerings to make web usage safer
OpenAI has begun embedding metadata in images produced by its Artificial Intelligence (AI) image generator DALL·E 3 and other tools.
OpenAI Driving Transparency With Metadata
Per an OpenAI post on X, the AI firm confirmed that, beyond DALL·E 3 images, images generated in ChatGPT will also bear the tag. The metadata is embedded using the C2PA specification, an open technical standard that lets publishers, companies, and others embed metadata in media.
Embedding metadata makes it possible to verify the origin of an image, and this is one of the benefits OpenAI is pursuing. The company believes the metadata on its images will help individuals, social media platforms, and content distributors easily identify that a piece of media came from OpenAI. Other related information can also be accessed via an image's metadata.
At the same time, it is worth noting that the metadata can be stripped from an image either intentionally or incidentally, and in such cases, "its absence doesn't mean an image is not from ChatGPT or our API," the company reiterated.
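As a rough illustration of where this metadata lives: in JPEG files, the C2PA standard carries its manifest in JUMBF boxes stored inside APP11 (0xFFEB) marker segments. The sketch below is a crude heuristic, not real verification (checking a manifest's cryptographic signatures requires official C2PA tooling); it simply walks a JPEG's marker segments and looks for a "c2pa" label in APP11 payloads.

```python
def find_app11_segments(data: bytes):
    """Walk JPEG marker segments and yield the payloads of APP11
    (0xFFEB) segments, which is where C2PA stores its JUMBF boxes."""
    if not data.startswith(b"\xff\xd8"):  # must begin with the SOI marker
        return
    i = 2
    while i + 4 <= len(data) and data[i] == 0xFF:
        marker = data[i + 1]
        # Standalone markers (SOI, EOI, RSTn) carry no length field
        if marker in (0xD8, 0xD9) or 0xD0 <= marker <= 0xD7:
            i += 2
            continue
        length = int.from_bytes(data[i + 2:i + 4], "big")  # includes the 2 length bytes
        if marker == 0xEB:  # APP11
            yield data[i + 4:i + 2 + length]
        i += 2 + length


def looks_like_c2pa(data: bytes) -> bool:
    """Heuristic only: does any APP11 payload contain a 'c2pa' label?
    Absence proves nothing (metadata may have been stripped), and
    presence is not cryptographic proof of origin."""
    return any(b"c2pa" in payload for payload in find_app11_segments(data))
```

A hypothetical usage would be `looks_like_c2pa(open("image.jpg", "rb").read())`. Consistent with the caveat above, a `False` result does not mean an image did not come from ChatGPT or the API, since the segments are easily removed by re-encoding or screenshotting.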
OpenAI noted that adopting this method, along with future provenance techniques, and urging users to watch for these signals are all steps aimed at boosting the integrity and trustworthiness of digital information.
The metadata change takes effect for mobile users on February 12, 2024. For now, the integration is limited to images generated with OpenAI's ChatGPT.
Growing Privacy Concerns in the AI Ecosystem
OpenAI took this step as concerns about AI privacy reached unprecedented levels. Many people are increasingly troubled by AI's effect on society and the safety risks posed by large language model (LLM) technology. Researchers recently found that certain AI models can exhibit deceptive behavior that could be harmful to humans.
Beyond privacy concerns, bad actors have consistently used AI for illicit activity. Ripple CEO Brad Garlinghouse warned his followers about a scheme in which scammers cloned a video of him falsely urging XRP holders to send in their coins for a promised doubling.
In the same fashion, explicit AI-generated images and videos of pop singer Taylor Swift circulated widely online, with one viewed as many as 47 million times. To help prevent such incidents, OpenAI appears to be investing in additional features and upgrades.