A comprehensive taxonomy of prompt engineering techniques for large language models
Yao-Yang LIU, Zhen ZHENG, Feng ZHANG, Jin-Cheng FENG, Yi-Yang FU, Ji-Dong ZHAI, Bing-Sheng HE, Xiao ZHANG, Xiao-Yong DU
Front. Comput. Sci., 2026, Vol. 20, Issue (3): 2003601
Large Language Models (LLMs) have demonstrated remarkable performance across various downstream tasks, as evidenced by numerous studies. Since 2022, generative AI has shown significant potential in diverse application domains, including gaming, film and television, media, and finance. By 2023, the global AI-generated content (AIGC) industry had attracted over $26 billion in investment. As LLMs become increasingly prevalent, prompt engineering has emerged as a key research area to enhance user-AI interactions and improve LLM performance. The prompt, which serves as the input instruction for the LLM, is closely linked to the model’s responses. Prompt engineering refines the content and structure of prompts, thereby enhancing the performance of LLMs without changing the underlying model parameters. Despite significant advancements in prompt engineering, a comprehensive and systematic summary of existing techniques and their practical applications remains absent. To fill this gap, we investigate existing techniques and applications of prompt engineering. We conduct a thorough review and propose a novel taxonomy that provides a foundational framework for prompt construction. This taxonomy categorizes prompt engineering into four distinct aspects: profile and instruction, knowledge, reasoning and planning, and reliability. By providing a structured framework for understanding its various dimensions, we aim to facilitate the systematic design of prompts. Furthermore, we summarize existing prompt engineering techniques and explore the applications of LLMs across various domains, highlighting their interrelation with prompt engineering strategies. This survey underscores the progress of prompt engineering and its critical role in advancing AI applications, ultimately aiming to provide a systematic reference for future research and applications.
prompt engineering / large language models / AI agents / survey / taxonomy
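To illustrate how the proposed taxonomy can guide prompt construction, the minimal Python sketch below assembles a prompt from the four aspects (profile and instruction, knowledge, reasoning and planning, reliability). The class name, field names, and example strings are illustrative assumptions made for this sketch, not an interface defined in the paper.

```python
# Minimal sketch: a prompt composed along the survey's four aspects.
# All names and example strings are illustrative assumptions, not an
# implementation prescribed by the paper.
from dataclasses import dataclass


@dataclass
class PromptSpec:
    profile: str      # persona/role and task instruction
    knowledge: str    # retrieved or in-context background knowledge
    reasoning: str    # reasoning and planning directive (e.g., chain of thought)
    reliability: str  # output-format and self-check constraints

    def render(self) -> str:
        # Concatenate the four aspects into a single prompt string.
        return "\n\n".join([
            f"[Profile & Instruction]\n{self.profile}",
            f"[Knowledge]\n{self.knowledge}",
            f"[Reasoning & Planning]\n{self.reasoning}",
            f"[Reliability]\n{self.reliability}",
        ])


if __name__ == "__main__":
    prompt = PromptSpec(
        profile="You are a financial analyst. Summarize the quarterly report below.",
        knowledge="Context: <retrieved passages would be inserted here>",
        reasoning="Think step by step before writing the summary.",
        reliability="Answer in JSON with keys 'summary' and 'confidence'; say 'unknown' if unsure.",
    )
    print(prompt.render())
```

In this sketch, each aspect maps to one block of the final prompt; none of the blocks alters model parameters, reflecting the survey's framing of prompt engineering as input-side refinement.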
© The Author(s) 2025. This article is published with open access at link.springer.com and journal.hep.com.cn.