Swarms of artificial intelligence (AI) agents could soon invade social media platforms en masse to spread false narratives, harass users and undermine democracy, researchers warn.
These “AI swarms” will form part of a new frontier in information warfare, capable of mimicking human behavior to avoid detection while creating the illusion of an authentic online movement, according to a commentary published Jan. 22 in the journal Science.
“Humans, generally speaking, are conformist,” commentary co-author Jonas Kunst, a professor of communication at the BI Norwegian Business School in Norway, told Live Science. “We often don’t want to agree with that, and people vary to a certain extent, but all things being equal, we do have a tendency to believe what most people do has certain value. That’s something that can relatively easily be hijacked by these swarms.”
And if you don’t get swept up with the herd, the swarm could also serve as a harassment tool, discouraging arguments that undermine the AI’s narrative, the researchers argued. For example, the swarm could emulate an angry mob to target an individual with dissenting views and drive them off the platform.
The researchers don’t give a timeline for the invasion of AI swarms, so it’s unclear when the first agents will arrive on our feeds. However, they noted that swarms would be difficult to detect, so the extent to which they may already have been deployed is unknown. For many, signs of bots’ growing influence on social media are already evident, and the “dead internet” conspiracy theory, which holds that bots are responsible for the majority of online activity and content creation, has been gaining traction over the past few years.
Shepherding the flock
The researchers warn that the emerging AI swarm risk is compounded by long-standing vulnerabilities in our digital ecosystems, already weakened by what they described as the “erosion of rational-critical discourse and a lack of shared reality among citizens.”
Anyone who uses social media will know that it has become a deeply divisive place. The online ecosystem is also already littered with automated bots: non-human accounts, controlled by software, that make up more than half of all web traffic. Conventional bots are typically capable only of performing simple tasks over and over, like posting the same incendiary message. They can still cause harm by spreading false information and inflating false narratives, but they are usually fairly easy to detect and rely on human operators to coordinate them at scale.
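To see why conventional bots are so easy to spot, consider how little code one needs. The sketch below is purely illustrative (the platform call is a stub, not any real API): it posts an identical message on a fixed schedule, and that combination of repeated payload and metronomic timing is exactly the fingerprint detection systems look for.

```python
import time

def post_to_platform(account: str, text: str) -> None:
    """Stub standing in for a real social media posting API."""
    print(f"[{account}] {text}")

MESSAGE = "Wake up! The mainstream media is hiding the truth!"

def run_simple_bot(account: str, interval_seconds: float, repeats: int) -> None:
    # A conventional bot: identical payload, fixed cadence.
    # Both properties are easy statistical tells for detectors.
    for _ in range(repeats):
        post_to_platform(account, MESSAGE)
        time.sleep(interval_seconds)

if __name__ == "__main__":
    run_simple_bot("bot_account_001", interval_seconds=1.0, repeats=3)
```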
The next-generation AI swarms, on the other hand, are coordinated by large language models (LLMs), the AI systems behind popular chatbots. With an LLM at the helm, a swarm will be sophisticated enough to adapt to the online communities it infiltrates, deploying collections of distinct personas that retain memory and identity, according to the commentary.
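The commentary doesn’t include an implementation, but a minimal sketch of what such a persona agent might look like is below. Everything here is hypothetical, and the LLM call is a stub; the point is only that each agent carries a stable persona and a running memory of past interactions, which is what the researchers say separates swarms from throwaway bots.

```python
from dataclasses import dataclass, field

def call_llm(prompt: str) -> str:
    """Stub standing in for a request to a large language model."""
    return f"(reply generated in character from: {prompt[:60]}...)"

@dataclass
class PersonaAgent:
    name: str
    persona: str  # e.g. "local parent, worried but friendly tone"
    memory: list = field(default_factory=list)  # persists across threads

    def reply(self, thread_text: str) -> str:
        # Condition the model on a stable persona plus accumulated memory,
        # so the account stays consistent over time instead of resetting
        # with every post the way a simple scripted bot does.
        prompt = (
            f"You are {self.persona}. Recent interactions: {self.memory[-5:]}. "
            f"Write a reply to: {thread_text}"
        )
        self.memory.append(thread_text)
        return call_llm(prompt)

# A "swarm" is then many such agents steered by a coordinating process.
swarm = [PersonaAgent(f"user_{i}", "local parent, worried tone") for i in range(3)]
for agent in swarm:
    print(agent.reply("Is the new school policy actually safe?"))
```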
“We talk about it as a kind of organism that is self-sufficient, that can coordinate itself, can learn, can adapt over time and, by that, specialize in exploiting human vulnerabilities,” Kunst said.
This mass manipulation is far from hypothetical. Last year, Reddit threatened legal action against researchers who used AI chatbots in an experiment to manipulate the opinions of the four million users of its popular forum r/changemyview. According to the researchers’ preliminary findings, their chatbots’ responses were three to six times more persuasive than those written by human users.
A swarm could contain hundreds, thousands — or even a million — AI agents. Kunst noted that the number scales with computing power and would also be limited by restrictions that social media companies may introduce to combat the swarms.
But it’s not all about the number of agents. Swarms could also target small local community groups, where a sudden influx of new users would arouse suspicion; in that scenario, only a handful of agents would be deployed. The researchers also noted that because swarms are more sophisticated than traditional bots, they can exert more influence with fewer accounts.
“I think the more sophisticated these bots are, the less you actually need,” commentary lead author Daniel Schroeder, a researcher at the technology research organization SINTEF in Norway, told Live Science.
Guarding against next-gen bots
Agents boast an edge in debates with real users because they can post 24 hours a day, every day, for however long it takes for their narrative to take hold. The researchers added that in “cognitive warfare,” AI’s relentlessness and persistence can be weaponized against limited human efforts.
Social media companies want real users on their platforms, not AI agents, so the researchers envisage that companies will respond to AI swarms with improved account authentication — forcing users to prove they are real people. But the researchers also flagged some issues with this approach, arguing that it could discourage political dissent in countries where people rely on anonymity to speak out against governments. Authentic accounts can also be hijacked or acquired, which complicates things further. Still, the researchers noted that strengthening authentication would make it more difficult and costly for those wishing to deploy AI swarms.
The researchers also proposed other swarm defenses, such as scanning live traffic for statistically anomalous patterns that could indicate AI swarms, and establishing an “AI Influence Observatory” ecosystem, in which academic groups, NGOs and other institutions can study, raise awareness of and respond to the AI swarm threat. In essence, the researchers want to get ahead of the issue before it can disrupt elections and other major events.
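The commentary doesn’t spell out a detection algorithm, but one simple version of the traffic-scanning idea is to flag accounts whose posting rhythm is implausibly regular for a human. The toy sketch below is an illustration of that idea, not the researchers’ method; it uses nothing more than the spread of the gaps between an account’s posts.

```python
import statistics

def looks_automated(post_times: list, cv_threshold: float = 0.1) -> bool:
    """Flag an account whose inter-post intervals are suspiciously regular.

    Humans post in bursts and lulls; a low coefficient of variation
    (standard deviation divided by mean of the gaps between posts)
    suggests a scheduler rather than a person.
    """
    if len(post_times) < 3:
        return False  # not enough data to judge
    gaps = [b - a for a, b in zip(post_times, post_times[1:])]
    mean_gap = statistics.mean(gaps)
    if mean_gap == 0:
        return True
    return statistics.stdev(gaps) / mean_gap < cv_threshold

# Toy data: timestamps in seconds since some reference point.
print(looks_automated([0, 60, 120, 180, 240]))    # True: perfectly even posting
print(looks_automated([0, 45, 400, 420, 3600]))   # False: bursty, human-like
```

A real detector would combine many such signals across accounts rather than within one, which is where the “statistically anomalous patterns” the researchers describe come in.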
“We are with a reasonable certainty warning about a future development that really might have disproportionate consequences for democracy, and we need to start preparing for that,” Kunst said. “We need to be proactive instead of waiting for the first type of larger events being negatively influenced by AI swarms.”