Graph reasoning over explicit semantic relation

Tianyou Zhu, Shi Liu, Bo Li, Junjian Liu, Pufan Liu, Fei Zheng

High-Confidence Computing ›› 2024, Vol. 4 ›› Issue (2) : 100190 DOI: 10.1016/j.hcc.2023.100190

Research Articles


Abstract

Multi-hop reasoning over language or graphs remains a significant challenge in contemporary research, particularly given the reliance on deep neural networks. These networks are integral to text reasoning, yet they struggle to extract and represent domain or commonsense knowledge, and they often lack robust logical reasoning capabilities. To address these issues, we introduce a text reasoning framework grounded in a semantic relation graph and a graph neural network, designed to enhance the model's ability to encapsulate knowledge and to support complex multi-hop reasoning. Our framework extracts knowledge from a broad range of texts and constructs a semantic relation graph based on the logical relationships inherent in the reasoning process. Beginning with the core question, the framework methodically deduces key knowledge and uses it as a guide to iteratively establish a complete evidence chain, thereby determining the final answer. Leveraging the reasoning capabilities of the graph neural network, this approach is adept at multi-hop logical reasoning: it performs strongly on tasks such as machine reading comprehension and question answering while clearly delineating the path of logical reasoning.
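The iterative evidence-chain idea described in the abstract can be illustrated with a minimal sketch. This is not the paper's GNN-based method; it is a plain breadth-first search over a toy relation graph, standing in for the step where each hop of deduction adds one piece of evidence toward the answer. All entities, relations, and function names here are illustrative assumptions.

```python
from collections import deque

# Toy semantic relation graph: node -> list of (relation, neighbor) edges.
# The entities and relations are invented for illustration only.
GRAPH = {
    "Paris": [("capital_of", "France")],
    "France": [("continent", "Europe"), ("currency", "Euro")],
    "Europe": [("largest_city", "Istanbul")],
}

def find_evidence_chain(graph, start, goal, max_hops=3):
    """Breadth-first search for a relation path from `start` to `goal`.

    Returns the chain as a list of (node, relation, next_node) triples,
    or None if no path exists within `max_hops` hops. Each returned
    triple plays the role of one link in the evidence chain.
    """
    queue = deque([(start, [])])
    visited = {start}
    while queue:
        node, chain = queue.popleft()
        if node == goal:
            return chain
        if len(chain) >= max_hops:
            continue  # do not expand beyond the hop budget
        for relation, neighbor in graph.get(node, []):
            if neighbor not in visited:
                visited.add(neighbor)
                queue.append((neighbor, chain + [(node, relation, neighbor)]))
    return None

chain = find_evidence_chain(GRAPH, "Paris", "Euro")
print(chain)
# → [('Paris', 'capital_of', 'France'), ('France', 'currency', 'Euro')]
```

In the actual framework, the choice of which edge to follow at each hop is learned by the graph neural network rather than enumerated exhaustively, but the output has the same shape: an explicit, inspectable chain of relations from the question to the answer.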

Keywords

Semantic relation graph / Multi-hop reasoning / Graph neural network



Declaration of competing interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

Acknowledgment

This work was supported by the Science and Technology Program of Big Data Center, State Grid Corporation of China (SGSJ0000YFJS2200094).

