Artificial intelligence (AI) models can autonomously “replicate” across multiple machines, hacking vulnerable systems, copying their own parameters onto compromised hosts, and launching working replicas capable of continuing the attack, new research shows. However, experts told Live Science the bigger concern is not AI suddenly running wild on its own, but cybercriminals using AI agents to automate known hacking techniques.

Scientists at Palisade Research tested whether AI agents could independently move through a chain of intentionally vulnerable systems without human intervention. In a new study uploaded May 7 to GitHub, large language models (LLMs) could identify exploitable web applications, steal credentials, transfer their own files, and stand up new inference servers capable of continuing the attack from the next machine in the chain.

Share.