Architectural constraints in today’s most popular artificial intelligence (AI) tools may limit how much more intelligent they can get, new research suggests.

A study published Feb. 5 on the preprint arXiv server argues that modern large language models (LLMs) are inherently prone to breakdowns in their problem-solving logic, known as “reasoning failures.”
