Popular AI chatbots often fail to recognize false health claims when those claims are delivered in confident, medical-sounding language, according to a January study in the journal The Lancet Digital Health. The result can be dubious advice that is potentially dangerous to the general public, such as a recommendation that people insert garlic cloves into their butts. A second study, published in February in the journal Nature Medicine, found that chatbots were no better at providing health information than an ordinary internet search.

The results add to a growing body of evidence suggesting that such chatbots are not reliable sources of health information, at least for the general public, experts told Live Science.
