Strategies for the Analysis and Elimination of Hallucinations in Artificial Intelligence Generated Medical Knowledge
Fengxian Chen, Yan Li, Yaolong Chen, Zhaoxiang Bian, La Duo, Qingguo Zhou, Lu Zhang
Journal of Evidence-Based Medicine, 2025, Vol. 18, Issue 3: e70075
The application of artificial intelligence (AI) in healthcare has become increasingly widespread, showing significant potential in assisting with diagnosis and treatment. However, generative AI (GAI) models often produce “hallucinations” (plausible but factually incorrect or unsubstantiated outputs) that threaten clinical decision-making and patient safety. This article systematically analyzes the causes of hallucinations across the data, training, and inference dimensions and proposes multi-dimensional strategies to mitigate them. Our findings yield three critical conclusions: technical optimization through knowledge graphs and multi-stage training significantly reduces hallucinations; clinical integration through expert feedback loops and multidisciplinary workflows enhances output reliability; and robust evaluation systems that combine adversarial testing with real-world validation substantially improve factual accuracy in clinical settings. Together, these strategies underscore the importance of harmonizing technical advances with clinical governance to develop trustworthy, patient-centric AI systems.
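To make one of these strategies concrete, the following is a minimal illustrative sketch in Python (not from the article; the toy MEDICAL_KG graph and the check_claims helper are hypothetical) of how knowledge-graph grounding might flag unsupported generated claims for expert review:

    # Toy medical knowledge graph as a set of (subject, relation, object)
    # triples. A real system would query a curated KG such as a drug or
    # disease ontology; this tiny set is purely illustrative.
    MEDICAL_KG = {
        ("metformin", "treats", "type 2 diabetes"),
        ("metformin", "contraindicated_in", "severe renal impairment"),
        ("warfarin", "interacts_with", "aspirin"),
    }

    def check_claims(claims):
        """Split extracted claims into KG-supported and unsupported
        (the latter are potential hallucinations)."""
        supported, unsupported = [], []
        for claim in claims:
            (supported if claim in MEDICAL_KG else unsupported).append(claim)
        return supported, unsupported

    # Claims extracted from a model's answer by an upstream triple extractor.
    generated = [
        ("metformin", "treats", "type 2 diabetes"),  # grounded in the KG
        ("metformin", "treats", "hypertension"),     # unsupported -> flag
    ]

    ok, flagged = check_claims(generated)
    for triple in flagged:
        print("Potential hallucination, route to expert review:", triple)

In practice such a grounding check would sit inside the expert feedback loop the article describes: unsupported claims are not silently discarded but escalated to clinicians, whose verdicts can in turn enrich the knowledge graph.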
assisted diagnosis and treatment / evaluation system / generative artificial intelligence / multi-stage training
© 2025 Chinese Cochrane Center, West China Hospital of Sichuan University and John Wiley & Sons Australia, Ltd.