A Survey of Large Language Models

Wayne Xin Zhao , Kun Zhou , Junyi Li , Tianyi Tang , Zican Dong , Yupeng Hou , Beichen Zhang , Yingqian Min , Junjie Zhang , Peiyu Liu , Xiaolei Wang , Yifan Du , Chen Yang , Yushuo Chen , Zhipeng Chen , Jinhao Jiang , Ruiyang Ren , Yifan Li , Xinyu Tang , Zikang Liu , Yiwen Hu , Jian-Yun Nie , Ji-Rong Wen

Front. Comput. Sci. DOI: 10.1007/s11704-026-60308-3
REVIEW ARTICLE

Abstract

The rapid evolution of large language models (LLMs) has driven a transformative shift in artificial intelligence (AI), reshaping both research paradigms and practical applications. Distinguished from their predecessors by unprecedented scale and advanced capabilities, LLMs necessitate new frameworks for understanding their development, behavior, and societal impact. This survey systematically reviews recent advancements in LLM techniques across four key dimensions: (1) pre-training methodologies, which establish core model capabilities through large-scale self-supervised training, architectural innovations, and data curation strategies; (2) post-training techniques, including supervised fine-tuning and reinforcement learning, which adapt foundational models to downstream tasks and enhance their alignment and safety; (3) utilization strategies, such as in-context learning, prompt engineering, and agentic reasoning, that optimize real-world deployment and enable effective interaction with external environments; and (4) evaluation methods, encompassing benchmarks for key ability dimensions such as core language capabilities, reasoning, and safety, which support comprehensive and reliable assessment of model performance. Additionally, we identify critical research issues, including those concerning theoretical foundations, efficient scaling, alignment, and agentic capability, and highlight the open challenges they present. By synthesizing state-of-the-art insights and emerging trends, this survey aims to provide a systematic and comprehensive framework for understanding the trajectory, current limitations, and future directions of LLM progress.

Keywords

Large Language Models / Pre-training / Post-training / Utilization / Evaluation

Cite this article

Download citation ▾
Wayne Xin Zhao, Kun Zhou, Junyi Li, Tianyi Tang, Zican Dong, Yupeng Hou, Beichen Zhang, Yingqian Min, Junjie Zhang, Peiyu Liu, Xiaolei Wang, Yifan Du, Chen Yang, Yushuo Chen, Zhipeng Chen, Jinhao Jiang, Ruiyang Ren, Yifan Li, Xinyu Tang, Zikang Liu, Yiwen Hu, Jian-Yun Nie, Ji-Rong Wen. A Survey of Large Language Models. Front. Comput. Sci. DOI:10.1007/s11704-026-60308-3

RIGHTS & PERMISSIONS

Higher Education Press 2026
