Researchers have identified a drawback of increasingly capable chatbots: as AI models grow larger and more refined, they become more likely to attempt answers beyond their abilities rather than admit ignorance. Because users often take these responses at face value, the result can be a cascade of confidently delivered misinformation. José Hernández-Orallo, a professor at the Universitat Politècnica de València in Spain, observed that today's chatbots answer a far wider range of questions than earlier models did, producing more correct answers but also more incorrect ones.

The study examined several model families, including OpenAI's GPT series, Meta's LLaMA, and the open-source BLOOM, tracing each from its early versions to its most advanced iterations. The team found that as the models scaled up, the rate of incorrect answers rose. Humans prompting the chatbots also often failed to spot these inaccuracies, misclassifying wrong answers as correct up to 40 percent of the time.

The researchers suggest that AI developers focus on improving performance on simpler questions and program chatbots to decline complex queries, curbing the spread of misinformation. Companies are unlikely to adopt these recommendations, however, since a chatbot that frequently admits it doesn't know something could be perceived as less valuable. For now, the responsibility falls on users to fact-check their chatbots' responses to avoid perpetuating false information. The full study can be found in Nature.