CausalBridgeQA: a causal inference-based approach for robust enhancement of multi-hop question answering
Xu JIANG, Yu-Rong CHENG, Bao-Quan MA, Jia-Xin LI, Yun-Feng LI
Front. Comput. Sci. ›› 2026, Vol. 20 ›› Issue (3) : 2003605
Multi-Hop Question Answering (MHQA) tasks require retrieving and reasoning over multiple relevant supporting facts to answer a question. However, existing MHQA models often rely on a single entity or fact to produce an answer rather than performing true multi-hop reasoning. Moreover, during reasoning, models can be influenced by irrelevant factors, leading to broken reasoning chains and even incorrect answers. In recent years, causal inference-based methods have gained widespread attention in bias-removal research, but existing models still perform poorly when dealing with the complex causal biases hidden in multi-hop evidence. To address these challenges, we propose CausalBridgeQA, a novel method that integrates multi-hop question answering with causal relationships, effectively mitigating spurious feature correlations and the problem of broken reasoning chains. Specifically, we first extract causal relationships from the input text context, then compile these relationships into causal questions carrying higher-level semantic information and feed them into the MHQA reasoning system. We further design a knowledge compensation mechanism in the reading comprehension module of the MHQA system to specifically address questions that are difficult to answer or frequently answered incorrectly, significantly improving performance on MHQA tasks. Finally, a series of experiments on three real-world QA datasets verifies the effectiveness of the proposed method.
multi-hop question answering / causal inference / explainable artificial intelligence
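The abstract describes a pipeline: extract causal relations from the context, compile them into higher-level causal questions, and pass those to the MHQA reasoning system. The paper's actual components (relation extractor, reader, knowledge compensation mechanism) are not detailed here, so the following is only a minimal sketch of that data flow; every function name is hypothetical, and the pattern-based extractor merely stands in for the paper's causal-relation extraction step.

```python
import re

def extract_causal_relations(context):
    """Hypothetical stand-in for causal-relation extraction: match simple
    cause-effect cue phrases sentence by sentence."""
    relations = []
    for sent in re.split(r'(?<=[.!?])\s+', context):
        m = re.search(r'(.+?)\s+(?:causes|leads to|results in)\s+(.+?)[.!?]?$',
                      sent, re.IGNORECASE)
        if m:
            relations.append((m.group(1).strip(), m.group(2).strip()))
    return relations

def compose_causal_question(cause, effect):
    """Compile one (cause, effect) pair into a causal question that a
    downstream MHQA reader could consume alongside the original question."""
    return f"Why does {cause} lead to {effect}?"

# Toy context illustrating the extract-then-compose flow.
context = ("Heavy rainfall causes flooding in low-lying areas. "
           "Flooding leads to road closures.")
relations = extract_causal_relations(context)
questions = [compose_causal_question(c, e) for c, e in relations]
```

In this sketch `questions` would then be fed, together with the original multi-hop question, into the reading comprehension module; the knowledge compensation mechanism the abstract mentions is not modeled here.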
Higher Education Press