A comprehensive taxonomy of prompt engineering techniques for large language models

Yao-Yang LIU, Zhen ZHENG, Feng ZHANG, Jin-Cheng FENG, Yi-Yang FU, Ji-Dong ZHAI, Bing-Sheng HE, Xiao ZHANG, Xiao-Yong DU

Front. Comput. Sci., 2026, 20(3): 2003601. DOI: 10.1007/s11704-025-50058-z

Information Systems
REVIEW ARTICLE


Abstract

Large Language Models (LLMs) have demonstrated remarkable performance across various downstream tasks, as evidenced by numerous studies. Since 2022, generative AI has shown significant potential in diverse application domains, including gaming, film and television, media, and finance. By 2023, the global AI-generated content (AIGC) industry had attracted over $26 billion in investment. As LLMs become increasingly prevalent, prompt engineering has emerged as a key research area to enhance user-AI interactions and improve LLM performance. The prompt, which serves as the input instruction for the LLM, is closely linked to the model’s responses. Prompt engineering refines the content and structure of prompts, thereby enhancing the performance of LLMs without changing the underlying model parameters. Despite significant advancements in prompt engineering, a comprehensive and systematic summary of existing techniques and their practical applications remains absent. To fill this gap, we investigate existing techniques and applications of prompt engineering. We conduct a thorough review and propose a novel taxonomy that provides a foundational framework for prompt construction. This taxonomy categorizes prompt engineering into four distinct aspects: profile and instruction, knowledge, reasoning and planning, and reliability. By providing a structured framework for understanding its various dimensions, we aim to facilitate the systematic design of prompts. Furthermore, we summarize existing prompt engineering techniques and explore the applications of LLMs across various domains, highlighting their interrelation with prompt engineering strategies. This survey underscores the progress of prompt engineering and its critical role in advancing AI applications, ultimately aiming to provide a systematic reference for future research and applications.
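For illustration only (this sketch is not taken from the paper), the short Python snippet below shows one way a prompt could be assembled along the four taxonomy aspects named above: profile and instruction, knowledge, reasoning and planning, and reliability. All function names, field labels, and the example content are hypothetical.

```python
# Illustrative sketch (hypothetical, not from the paper): composing a prompt
# from the four taxonomy aspects named in the abstract.

def build_prompt(profile: str, instruction: str, knowledge: str,
                 reasoning_hint: str, reliability_rule: str, question: str) -> str:
    """Assemble one prompt string from labeled segments, one per aspect."""
    return "\n\n".join([
        f"Role: {profile}",                      # profile and instruction
        f"Task: {instruction}",
        f"Reference material:\n{knowledge}",     # knowledge (e.g., retrieved context)
        f"Approach: {reasoning_hint}",           # reasoning and planning
        f"Constraint: {reliability_rule}",       # reliability
        f"Question: {question}",
    ])

prompt = build_prompt(
    profile="You are a careful financial analyst.",
    instruction="Answer the question using only the reference material.",
    knowledge="Q3 revenue grew 12% year over year; operating margin was 18%.",
    reasoning_hint="Think step by step before giving the final answer.",
    reliability_rule="If the material does not contain the answer, say so explicitly.",
    question="What was the year-over-year revenue growth in Q3?",
)
print(prompt)
```

Each aspect maps to one labeled segment of the final prompt string, so individual segments can be swapped or ablated when experimenting with prompt designs; none of this structure is prescribed by the paper itself.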

Keywords

prompt engineering / large language models / AI agents / survey / taxonomy

Cite this article

Yao-Yang LIU, Zhen ZHENG, Feng ZHANG, Jin-Cheng FENG, Yi-Yang FU, Ji-Dong ZHAI, Bing-Sheng HE, Xiao ZHANG, Xiao-Yong DU. A comprehensive taxonomy of prompt engineering techniques for large language models. Front. Comput. Sci., 2026, 20(3): 2003601. DOI: 10.1007/s11704-025-50058-z


RIGHTS & PERMISSIONS

© The Author(s) 2025. This article is published with open access at link.springer.com and journal.hep.com.cn.
