AI News: Dark Side Of AI Uncovered In Israel-Hamas War
Highlights
- Artificial Intelligence featured in Israel-Hamas war
- The AI system was dubbed Lavender
- This further raises concerns over AI safety
It appears that the Israeli military employed an Artificial Intelligence (AI) system known as Lavender to strike a massive number of Hamas targets.
Israel-Hamas War: Lavender AI Impact
According to a report from The Guardian, Israeli intelligence sources revealed that the military's bombing campaign in Gaza relied on a previously undisclosed AI-powered database. This database identified 37,000 potential targets linked to Hamas.
In addition to this revelation, sources claimed that Israeli military officials permitted a large number of Palestinian civilians to be killed, especially in the early weeks and months of the conflict.
The revelation has served as an eye-opener to how Israel leveraged machine learning to identify and attack its targets during the battle. It also highlights the relationship between AI and advanced warfare, and possibly terrorism.

The use of an AI system in the Israel-Hamas war raises many questions, centered chiefly on legal and moral grounds.
AI Safety Raising Concern Globally
It is well known that one advantage of AI tools is that they make it easier to achieve significant results in a short time. Even in the case of Israel's use of an AI-enabled platform, an intelligence officer acknowledged that using Lavender saved them a lot of time.
“I would invest 20 seconds for each target at this stage, and do dozens of them every day. I had zero added-value as a human, apart from being a stamp of approval. It saved a lot of time,” he said.
In light of this, AI safety has become a source of concern for several entities, especially as it affects mankind. The fear largely centers on how bad actors could use AI to carry out illicit activities, and on the security implications of automating this much information. New research into the AI ecosystem has also found most of its models unsafe for humans.
These concerns led the United States and the UK to collaborate on systems that drive safety measures for the AI ecosystem. This move was triggered by the commitments made at the AI safety summit held at Bletchley Park in November.
US Commerce Secretary Gina Raimondo noted that the alliance is a way to “address the risks of our national security concerns and the concerns of our broader society.”