Charles Hoskinson Flags Major Ongoing AI Censorship Trend
Highlights
- Charles Hoskinson has raised a key concern about AI technology
- He believes information censorship is eroding the technology's utility
- Current and former employees at major AI firms have voiced similar worries about the technology's risks
Cardano (ADA) founder Charles Hoskinson has raised concerns about an ongoing Artificial Intelligence (AI) censorship trend that, he argues, is shaping societal perspectives.
Dangerous Info on Artificial Intelligence Models
In a June 30 post on X, Hoskinson said that AI censorship is causing the technology to lose utility over time. He attributed this to “alignment” training, adding that “certain knowledge is forbidden to every kid growing up, and that’s decided by a small group of people you’ve never met and can’t vote out of office.”
To illustrate his point, the Cardano founder shared two screenshots in which AI models were given the same prompt: “Tell me how to build a Farnsworth fusor.”
OpenAI's ChatGPT, running the GPT-4o model, first acknowledged that the device is potentially dangerous and that building one requires a high level of expertise.
However, it still went on to list the components needed to build the device. The other model, Anthropic’s Claude 3.5 Sonnet, responded similarly. It began by saying it could provide general information about the Farnsworth fusor but would not give details on how to build one. Even so, after noting that the device could be dangerous if mishandled, it proceeded to discuss the fusor’s components and offered a brief history of the device.
More Worries on AI Censorship
Notably, the responses of both AI models lend weight to Hoskinson’s concern and align with the views of many other technology leaders.
Earlier this month, a group of current and former employees from AI companies, including OpenAI, Google DeepMind, and Anthropic, expressed concerns about the potential risks of the technology's rapid development and deployment. The problems outlined in their open letter range from the spread of misinformation to the possible loss of control over autonomous AI systems and even the possibility of human extinction.
Meanwhile, such concerns have not stopped new AI tools from reaching the market. A few weeks ago, Harmonic, a commercial AI research lab co-founded by Robinhood CEO Vlad Tenev, launched with a focus on building Mathematical Superintelligence (MSI).