This AI malware worm can steal private data and send spam emails without you ever having to click

By Pascale Davies

Researchers have created an artificial intelligence (AI) worm that can infiltrate your emails and access data without you ever needing to click.


Security researchers have created an artificial intelligence (AI) worm that can infiltrate models such as ChatGPT and Gemini, spread malware and potentially steal data.

The researchers, who are based in the United States and Israel, purposely built the computer worm as a cautionary demonstration, to help prevent such worms from emerging in generative AI (GenAI) models.

A computer worm can replicate itself and spread by compromising other machines. Unlike other malware, this one does not need you to click a link or open an email to deliver its payload; it does so automatically the moment the infected email is received.

They called the worm Morris II, after the Morris worm of 1988, the first ever computer worm.

How does it work?

The researchers created an email system that could reply to messages using GenAI, plugging into the models ChatGPT, Gemini, and LLaVA.

Morris II was demonstrated against GenAI-powered email assistants and could steal personal data and launch spamming campaigns.

"The study demonstrates that attackers can insert such prompts into inputs that, when processed by GenAI models, prompt the model to replicate the input as output (replication) and engage in malicious activities (payload)," the researchers wrote.

A prompt is any question or instruction given to the AI model. The malicious prompt forces the model to output another prompt in its reply, which can infect the AI assistant and then draw out sensitive information.

The worm can then be sent to other contacts in your online network, "exploiting the connectivity within the GenAI ecosystem," they added.

The researchers warn that while no AI worms have been spotted in the wild, it is only a matter of time before they appear.

"It basically means that now you have the ability to conduct or to perform a new kind of cyberattack that hasn't been seen before," Ben Nassi, a Cornell University researcher behind the computer worm, told the publication Wired in an interview.

The study makes for a worrying case as AI assistants find their way into smart devices and even cars, where they can send emails or book appointments on someone's behalf.

In another paper released last month, researchers from Singapore and China showed that they could easily gain root access to a large language model's (LLM) operating system.

