Highlights
In the latest AI news, researchers have unveiled some possible security threats that are inherent in Artificial Intelligence (AI) tools like OpenAI’s ChatGPT.
With the rate at which different AI tools are being launched by different companies, one would think that their usage comes with no risks.
However, new research into these innovative technologies has revealed that users may be predisposed to some security threats even though this is not the case presently. It is worth noting that there are already fears around AI safety as noted by different regulatory bodies.
Researchers noted that AI tools like ChatGPT and Google’s Gemini which had its latest version released a few weeks ago, can be breeding grounds for malware threats.
The research discovered a malware worm that “exploits bad architecture design for the GenAI ecosystem and is not a vulnerability in the GenAI service.” This malware worm is named Morris II, after the Morris worm of 1988 which crashed about 10% of all computers connected to the internet at the time.
This kind of malware worm is capable of destroying by replicating and spreading itself to other systems. Most of the time, it does not require user interaction to infect Generative AI. Ordinarily, these GenAI platforms require prompts; and instructions in text format, to carry out their functions. Morris II tries to override the system by compromising prompts and transforming them into malicious instructions.
The malicious prompts trick the GenAI into performing deleterious actions without the knowledge of the user or the system.
Consequently, AI users are advised to be vigilant and cautious about emails and links from unknown or untrustworthy sources. For reinforcement, users could also invest in reliable and efficient antivirus software that can easily identify and remove malware, including these computer worms. This, according to the researchers, is the best method to keep the malware worms out of your system.
The use of strong passwords, constant system updates, and limited file-sharing are some of the other suggestions to limit the activities of malware worms.
Amidst this research, a new AI tool that can recreate the human voice has been introduced by Sam Altman’s OpenAI. Voice Engine requires text input and a single 15-second recording sample to recreate a person’s voice. Considering its GenAI model, there is a high potential for this new tool to also be exploited by bad actors when it finally goes live following the ongoing testing phase.
Grayscale Investments has secured approval to begin trading on NYSE Arca tomorrow. Also, the fund…
U.S. President Donald Trump is forging ahead with his plan to remove Fed Governor Lisa…
Pi Coin recorded modest gains after Pi Network confirmed Protocol v23 deployment on its testnet.…
Plasma stablecoin blockchain has provided an update on the launch of its mainnet beta and…
Cathie Wood’s Ark Invest has joined a $300 million private placement that will rebrand NASDAQ-listed…
REX Shares and Osprey Funds have announced the official launch of their Dogecoin and XRP…