Position: How can Graphs Help Large Language Models?

Xiyuan Wang, Yi Hu, Yanbo Wang, Chuan Shi, Muhan Zhang

Front. Comput. Sci. DOI: 10.1007/s11704-026-51651-6
REVIEW ARTICLE

Abstract

With the rapid advancement of large language models (LLMs), classic graph learning tasks have greatly benefited from LLMs, including improved encoding of textual features, more efficient construction of graphs from text, and enhanced reasoning over knowledge graphs. In this paper, we ask the complementary question: “How can graphs help LLMs?” We address this question from three perspectives: 1) graphs provide an up-to-date knowledge source that helps reduce LLM hallucinations, 2) graph-based prompting techniques—such as Chain-of-Thought (CoT), Tree-of-Thought (ToT), and Graph-of-Thought (GoT)—enhance LLM reasoning capabilities, and 3) integrating graphs into LLMs improves their understanding of structured data, expanding their applicability to domains such as e-commerce, code, and relational databases (RDBs). We further outline future directions, including graph-based sparse LLM architectures and brain-inspired memory systems.
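To make the second perspective concrete, the control flow behind graph-based prompting can be sketched as follows. This is a minimal illustration, not the paper's implementation: the "LLM" generation and scoring calls are mocked, and all names (`Thought`, `got_step`, `mock_generate`) are hypothetical. The key structural point is that CoT yields a chain (one parent per thought), ToT a tree (branching), and GoT a general graph that additionally permits merging thoughts.

```python
# Toy sketch of a Graph-of-Thought (GoT) style aggregation step.
# Real systems would call an LLM to generate and score thoughts;
# here both are stand-ins so the control flow is runnable on its own.

from dataclasses import dataclass, field

@dataclass
class Thought:
    text: str
    score: float
    parents: list = field(default_factory=list)  # incoming edges in the thought graph

def mock_generate(parent_texts):
    # Stand-in for an LLM call that merges several partial solutions.
    return "combine(" + ", ".join(parent_texts) + ")"

def mock_score(text):
    # Stand-in for an LLM/verifier score; here, longer = more refined.
    return float(len(text))

def got_step(frontier, k=2):
    """One GoT step: aggregate the k best thoughts into a new merged node.

    CoT is the special case k=1 with a single lineage; ToT allows branching
    but no merging; GoT allows arbitrary merge edges, forming a DAG.
    """
    best = sorted(frontier, key=lambda t: t.score, reverse=True)[:k]
    merged_text = mock_generate([t.text for t in best])
    merged = Thought(merged_text, mock_score(merged_text), parents=best)
    return frontier + [merged]

frontier = [Thought("idea A", 1.0), Thought("idea B", 2.0), Thought("idea C", 3.0)]
frontier = got_step(frontier)
print(frontier[-1].text)  # merges the two highest-scoring thoughts
```

The merge edge (a node with multiple parents) is exactly what distinguishes a thought *graph* from a thought chain or tree.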

Keywords

Graphs / LLMs / GNNs / Knowledge Graphs

Cite this article

Xiyuan Wang, Yi Hu, Yanbo Wang, Chuan Shi, Muhan Zhang. Position: How can Graphs Help Large Language Models?. Front. Comput. Sci. DOI: 10.1007/s11704-026-51651-6



RIGHTS & PERMISSIONS

Higher Education Press 2026
