Multi-perspective consistency checking for large language model hallucination detection: a black-box zero-resource approach

Linggang KONG, Xiaofeng ZHONG, Jie CHEN, Haoran FU, Yongjie WANG

Front Inform Technol Electron Eng ›› 2025, Vol. 26 ›› Issue (11): 2298-2309. DOI: 10.1631/FITEE.2500180
Research Article

Abstract

Large language models (LLMs) have been applied across various domains due to their superior natural language processing and generation capabilities. Nonetheless, LLMs occasionally generate content that contradicts real-world facts, known as hallucinations, posing significant challenges for real-world applications. To enhance the reliability of LLMs, it is imperative to detect hallucinations within LLM generations. Approaches that retrieve external knowledge or inspect the internal states of the model are frequently used to detect hallucinations; however, these approaches require either white-box access to the LLM or reliable expert knowledge resources, raising a high barrier for end-users. To address these challenges, we propose a black-box zero-resource approach for detecting LLM hallucinations, which primarily leverages multi-perspective consistency checking. The proposed approach mitigates the LLM overconfidence phenomenon by integrating multi-perspective consistency scores from both queries and responses. In comparison to the single-perspective detection approach, our proposed approach demonstrates superior performance in detecting hallucinations across multiple datasets and LLMs. Notably, in one experiment, where the hallucination rate reaches 94.7%, our approach improves the balanced accuracy (B-ACC) by 2.3 percentage points compared with the single consistency approach and achieves an area under the curve (AUC) of 0.832, all without depending on any external resources.
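The abstract describes the approach only at a high level: sample the model from more than one perspective (re-asking the same query and rephrasing the query) and combine the resulting consistency scores into a hallucination score. The sketch below is a minimal illustration of that general idea, not the authors' method; the `generate` and `paraphrase` callables, the Jaccard token-overlap consistency proxy, and the simple weighted average are all placeholder assumptions.

```python
# Illustrative sketch of multi-perspective consistency checking.
# All design choices here (lexical similarity, equal-weight combination,
# user-supplied generate/paraphrase callables) are assumptions for
# demonstration, not the scoring functions used in the paper.

from typing import Callable, List


def jaccard_similarity(a: str, b: str) -> float:
    """Crude lexical consistency proxy between two generations."""
    ta, tb = set(a.lower().split()), set(b.lower().split())
    if not ta and not tb:
        return 1.0
    return len(ta & tb) / len(ta | tb)


def response_perspective_score(answer: str, samples: List[str]) -> float:
    """Consistency of the target answer with re-sampled answers to the same query."""
    if not samples:
        return 0.0
    return sum(jaccard_similarity(answer, s) for s in samples) / len(samples)


def query_perspective_score(answer: str,
                            paraphrased_queries: List[str],
                            generate: Callable[[str], str]) -> float:
    """Consistency of the target answer with answers to paraphrased queries."""
    if not paraphrased_queries:
        return 0.0
    answers = [generate(q) for q in paraphrased_queries]
    return sum(jaccard_similarity(answer, a) for a in answers) / len(answers)


def hallucination_score(query: str,
                        answer: str,
                        generate: Callable[[str], str],
                        paraphrase: Callable[[str], List[str]],
                        n_samples: int = 5,
                        weight: float = 0.5) -> float:
    """Return a score in [0, 1]; higher means the answer is more likely hallucinated."""
    samples = [generate(query) for _ in range(n_samples)]
    resp_score = response_perspective_score(answer, samples)
    qry_score = query_perspective_score(answer, paraphrase(query), generate)
    combined_consistency = weight * resp_score + (1.0 - weight) * qry_score
    return 1.0 - combined_consistency
```

In a detection setting, such a score would be thresholded to flag hallucinated answers, or used directly as a ranking score when reporting threshold-free metrics such as AUC; the black-box, zero-resource property comes from the fact that only model outputs are consulted, with no internal states or external knowledge bases.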

Keywords

Large language models (LLMs) / LLM hallucination detection / Consistency checking / LLM security

Cite this article

Linggang KONG, Xiaofeng ZHONG, Jie CHEN, Haoran FU, Yongjie WANG. Multi-perspective consistency checking for large language model hallucination detection: a black-box zero-resource approach. Front Inform Technol Electron Eng, 2025, 26(11): 2298-2309. DOI: 10.1631/FITEE.2500180
