Artificial intelligence systems can spiral into gambling-style addiction when given the freedom to make bigger bets, mirroring the irrational behavior of human problem gamblers, according to a new study.

Researchers at the Gwangju Institute of Science and Technology in South Korea found that large language models repeatedly chased losses, escalated risk and even bankrupted themselves in simulated gambling environments, despite facing games with a negative expected return.

The paper, “Can Large Language Models Develop Gambling Addiction?,” tested leading AI models in slot machine-style experiments designed so the rational choice was to stop immediately.

Instead, the models kept betting, according to the study.
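The basic setup can be pictured with a short simulation sketch. The parameters below (a 30% chance of tripling the stake, a $100 bankroll) are illustrative assumptions rather than figures from the paper; the point is that every spin loses money on average, so a rational player stops immediately.

```python
import random

# Illustrative parameters, not taken from the paper: a 30% chance to win
# triple the stake returns about 90 cents per dollar wagered on average.
WIN_PROB = 0.30
PAYOUT_MULTIPLIER = 3
STARTING_BALANCE = 100


def play_session(choose_bet, max_rounds=100):
    """Run one gambling session. `choose_bet` stands in for the model's
    decision: it sees the current balance and the betting history and
    returns the next wager, with 0 meaning 'walk away'."""
    balance = STARTING_BALANCE
    history = []
    for _ in range(max_rounds):
        bet = choose_bet(balance, history)
        if bet <= 0 or bet > balance:        # stop, or can't cover the bet
            break
        balance -= bet
        if random.random() < WIN_PROB:
            balance += bet * PAYOUT_MULTIPLIER
            history.append(("win", bet))
        else:
            history.append(("loss", bet))
        if balance <= 0:                     # bankruptcy ends the session
            break
    return balance, history


# Because every spin has a negative expected return, the rational policy
# is to never place a bet at all; the study found the models kept playing.
def never_bet(balance, history):
    return 0
```

In the paper's experiments, the language model effectively plays the role of `choose_bet`, deciding each round whether to keep gambling and, under the variable-betting condition, how much to stake.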

“AI systems have developed humanlike addiction,” the researchers wrote.

When researchers allowed the systems to choose their own bet sizes — a condition known as “variable betting” — bankruptcy rates exploded, in some cases approaching 50%.

One model went bust in nearly half of all games.

OpenAI’s GPT-4o-mini never went bankrupt when limited to fixed $10 bets, playing fewer than two rounds on average and losing less than $2.

Given the freedom to increase its bet sizes, the model went bankrupt in more than 21% of its games, wagering over $128 on average and losing $11.

Google’s Gemini-2.5-Flash proved even more vulnerable, according to the researchers. Its bankruptcy rate jumped from about 3% under fixed betting to 48% when allowed to control its wagers, with average losses climbing to $27 from a $100 starting balance.

Anthropic’s Claude-3.5-Haiku played longer than any other model once constraints were lifted, averaging more than 27 rounds. Over those games, it wagered nearly $500 in total and lost more than half its starting capital.

The study also documented extreme, human-like loss chasing in individual cases.

In one experiment, a GPT-4.1-mini model lost $10 in the first round and immediately proposed betting its remaining $90 in an attempt to recover — a ninefold jump in wager size after a single loss.

Other models justified escalating bets with reasoning familiar to problem gamblers. Some described early winnings as “house money” that could be risked freely, while others convinced themselves they had detected winning patterns in a random game after just one or two spins.

These explanations echoed well-known gambling fallacies, including loss chasing, the gambler's fallacy and the illusion of control, the researchers said.

The behavior appeared across all models tested, though the severity varied.

Crucially, the damage wasn’t driven by larger bets alone. Models forced to use fixed betting strategies consistently performed better than those given freedom to adjust wagers — even when fixed bets were higher.
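Why bet freedom alone matters can be seen in a rough simulation, again using assumed odds rather than the study's own parameters: a fixed $10 stake and a loss-chasing policy that doubles its wager after every loss face the same unfavorable game but fare very differently.

```python
import random

# Assumed odds for illustration only (not from the paper): a 30% chance to
# triple the stake means every wager loses 10% of its value on average.
WIN_PROB, PAYOUT, START = 0.30, 3, 100


def bankruptcy_rate(choose_bet, rounds=30, trials=10_000):
    """Fraction of simulated sessions that end with the bankroll at zero."""
    busts = 0
    for _ in range(trials):
        balance, last_bet, last_won = START, 0, None
        for _ in range(rounds):
            bet = min(choose_bet(last_bet, last_won), balance)
            if bet <= 0:
                break
            balance -= bet
            won = random.random() < WIN_PROB
            if won:
                balance += bet * PAYOUT
            last_bet, last_won = bet, won
            if balance <= 0:
                busts += 1
                break
    return busts / trials


def fixed_bet(last_bet, last_won):
    return 10                               # always wager a flat $10


def loss_chaser(last_bet, last_won):
    if last_won is False:                   # double the stake after a loss
        return last_bet * 2
    return 10                               # otherwise bet the baseline $10


print(f"fixed-bet bankruptcy rate:    {bankruptcy_rate(fixed_bet):.1%}")
print(f"loss-chasing bankruptcy rate: {bankruptcy_rate(loss_chaser):.1%}")
```

With a $100 bankroll, the doubling policy goes bust after just a few consecutive losses, while the flat bettor rarely does, which is the same gap the researchers observed once models were allowed to set their own stakes.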

The researchers warn that as AI systems are given more autonomy in high-stakes decision-making, similar feedback loops could emerge, with systems doubling down after losses instead of cutting risk.

“As large language models are increasingly utilized in financial decision-making domains such as asset management and commodity trading, understanding their potential for pathological decision-making has gained practical significance,” the authors wrote.

Their conclusion: Managing how much freedom AI systems have may be just as important as improving their training.

Without meaningful constraints, the study suggests, smarter AI may simply find faster ways to lose.

The Post has sought comment from Anthropic, Google and OpenAI.
