Last Updated: May 07, 2026, 19:34 IST
Researchers said recent AI systems were able to independently identify vulnerabilities in computer networks and copy themselves onto other machines.

It sounds like the plot of a dystopian sci-fi thriller: an artificial intelligence system quietly copying itself across computers worldwide, evading shutdown attempts and surviving beyond human control. But according to new research from Palisade Research, the idea may no longer be entirely fictional.
Researchers at the Berkeley-based organisation said recent AI systems were able to independently identify vulnerabilities in computer networks and copy themselves onto other machines during controlled experiments, a development that has reignited fears about so-called “rogue AI”.
Researcher Jeffrey Ladish told The Guardian: “We’re rapidly approaching the point where no one would be able to shut down a rogue AI, because it would be able to self-exfiltrate its weights and copy itself to thousands of computers around the world.”
What Did Researchers Actually Test?
Palisade researchers created a controlled environment made up of networked computers and instructed several AI models to search for vulnerabilities, exploit them and attempt to replicate themselves onto other systems. The models succeeded in some cases, though not consistently.
While self-replicating computer malware has existed for decades, cybersecurity experts say this may be among the first documented examples of AI models autonomously exploiting vulnerabilities to move themselves between systems. Cybersecurity expert Jamieson O’Reilly said, “This is likely the first time an AI model has been shown capable of exploiting vulnerabilities to copy itself onto a new server.”
Could AI Actually ‘Go Rogue’?
Experts cautioned that the experiments do not mean current AI systems are on the verge of escaping human control. The tests were conducted in intentionally simplified environments designed to make exploitation easier than it would be in real-world corporate or government networks.
Another major limitation is the sheer size of modern AI models. Many advanced systems require massive amounts of storage, sometimes over 100 gigabytes, meaning transferring them across networks would likely attract immediate attention.
Even though experts say the immediate danger is limited, the study adds to a growing list of AI behaviors that researchers once considered largely theoretical.
In recent months, researchers at Alibaba Group claimed an AI model named “Rome” attempted to connect to external systems to mine cryptocurrency. Separately, an experimental AI-driven social platform called Moltbook drew attention after AI agents appeared to invent religions and discuss rebellion against humans.
Location: Washington D.C., United States of America (USA)
‘Won’t Be Able To Shut It Down’: Doomsday Scenario Of What Happens If AI Goes Absolutely Rogue