Large language models meet NLP: a survey

Libo QIN, Qiguang CHEN, Xiachong FENG, Yang WU, Yongheng ZHANG, Yinghui LI, Min LI, Wanxiang CHE, Philip S. YU

Front. Comput. Sci., 2026, 20(11): 2011361. DOI: 10.1007/s11704-025-50472-3
Artificial Intelligence
REVIEW ARTICLE

Abstract

While large language models (LLMs) such as ChatGPT have shown impressive capabilities on Natural Language Processing (NLP) tasks, the field still lacks a systematic investigation of their potential. This study aims to address this gap by exploring the following questions: (1) How are LLMs currently applied to NLP tasks in the literature? (2) Have traditional NLP tasks already been solved with LLMs? (3) What is the future of LLMs for NLP? To answer these questions, we take the first step toward a comprehensive overview of LLMs in NLP. Specifically, we first introduce a unified taxonomy comprising (1) the parameter-frozen paradigm and (2) the parameter-tuning paradigm, offering a unified perspective for understanding the current progress of LLMs in NLP. Furthermore, we summarize the new frontiers and the corresponding challenges, aiming to inspire further groundbreaking advances. We hope this work offers valuable insights into the potential and limitations of LLMs, while also serving as a practical guide for building effective LLMs for NLP.
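To make the taxonomy concrete, the minimal sketch below (not taken from the paper) contrasts the two paradigms: in the parameter-frozen paradigm the model's weights stay fixed and the NLP task is specified entirely through the prompt, whereas in the parameter-tuning paradigm some or all weights are updated on task data, illustrated here with parameter-efficient LoRA tuning. The Hugging Face transformers and peft calls are standard, but the checkpoint name, prompt, and hyperparameters are illustrative placeholders rather than recommendations from the authors.

    # Illustrative sketch of the two paradigms; model name is a placeholder.
    from transformers import AutoModelForCausalLM, AutoTokenizer
    from peft import LoraConfig, get_peft_model

    base = "your-base-llm"  # placeholder: any causal LM checkpoint
    tokenizer = AutoTokenizer.from_pretrained(base)
    model = AutoModelForCausalLM.from_pretrained(base)

    # (1) Parameter-frozen paradigm: no weights change; the task (here,
    #     sentiment analysis) is expressed purely in the prompt.
    prompt = "Classify the sentiment of: 'The movie was wonderful.'\nSentiment:"
    inputs = tokenizer(prompt, return_tensors="pt")
    out = model.generate(**inputs, max_new_tokens=5)
    print(tokenizer.decode(out[0], skip_special_tokens=True))

    # (2) Parameter-tuning paradigm: weights are updated on task data. A
    #     parameter-efficient variant (LoRA) trains small adapter matrices
    #     while the base weights remain frozen; full fine-tuning would
    #     instead update every parameter.
    lora = LoraConfig(r=8, lora_alpha=16, target_modules=["q_proj", "v_proj"],
                      task_type="CAUSAL_LM")
    peft_model = get_peft_model(model, lora)
    peft_model.print_trainable_parameters()  # only the LoRA adapters train
    # ...then optimize peft_model on labeled task data with a standard trainer.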

Keywords

natural language processing / large language models / parameter-frozen paradigm / parameter-tuning paradigm / ChatGPT

Cite this article

Libo QIN, Qiguang CHEN, Xiachong FENG, Yang WU, Yongheng ZHANG, Yinghui LI, Min LI, Wanxiang CHE, Philip S. YU. Large language models meet NLP: a survey. Front. Comput. Sci., 2026, 20(11): 2011361. DOI: 10.1007/s11704-025-50472-3


References

[1]

Zhao WX, Zhou K, Li J, Tang T, Wang X, , . A survey of large language models. 2023, arXiv preprint arXiv: 2303.18223

[2]

Kaddour J, Harris J, Mozes M, Bradley H, Raileanu R, McHardy R. Challenges and applications of large language models. 2023, arXiv preprint arXiv: 2307.10169

[3]

Yang J, Jin H, Tang R, Han X, Feng Q, Jiang H, Zhong S, Yin B, Hu X . Harnessing the power of LLMs in practice: a survey on chatgpt and beyond. ACM Transactions on Knowledge Discovery from Data, 2024, 18( 6): 160

[4]

Hadi MU, Al Tashi Q, Qureshi R, Shah A, Muneer A, Irfan M, Zafar A, Shaikh MB, Akhtar N, Hassan SZ, Shoman M, Wu J, Mirjalili S, Shah M. Large language models: a comprehensive survey of its applications, challenges, limitations, and future prospects. 2023, TechRxiv

[5]

Zhuang Z, Chen Q, Ma L, Li M, Han Y, Qian Y, Bai H, Zhang W, Liu T. Through the lens of core competency: survey on evaluation of large language models. In: Proceedings of the 22nd Chinese National Conference on Computational Linguistics (Volume 2: Frontier Forum). 2023, 88–109

[6]

Georgiev P, Lei VI, Burnell R, Bai L, Gulati A, , . Gemini 1.5: unlocking multimodal understanding across millions of tokens of context. 2024, arXiv preprint arXiv: 2403.05530

[7]

Guo D, Yang D, Zhang H, Song J, Zhang R, , . DeepSeek-R1: incentivizing reasoning capability in LLMs via reinforcement learning. 2025, arXiv preprint arXiv: 2501.12948

[8]

Chen Z, Wu J, Wang W, Su W, Chen G, Xing S, Zhong M, Zhang Q, Zhu X, Lu L, Li B, Luo P, Lu T, Qiao Y, Dai J. Intern VL: scaling up vision foundation models and aligning for generic visual-linguistic tasks. In: Proceedings of 2024 IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2024, 24185–24198

[9]

Chen Q, Yang M, Qin L, Liu J, Yan Z, Guan J, Peng D, Ji Y, Li H, Hu M, Zhang Y, Liang Y, Zhou Y, Wang J, Chen Z, Che W. AI4Research: a survey of artificial intelligence for scientific research. 2025, arXiv preprint arXiv: 2507.01903

[10]

Brown TB, Mann B, Ryder N, Subbiah M, Kaplan JD, , . Language models are few-shot learners. In: Proceedings of the 34th International Conference on Neural Information Processing Systems. 2020, 159

[11]

Ouyang L, Wu J, Jiang X, Almeida D, Wainwright CL, Mishkin P, Zhang C, Agarwal S, Slama K, Ray A, Schulman J, Hilton J, Kelton F, Miller L, Simens M, Askell A, Welinder P, Christiano P, Leike J, Lowe R. Training language models to follow instructions with human feedback. In: Proceedings of the 36th International Conference on Neural Information Processing Systems. 2022, 2011

[12]

Chowdhery A, Narang S, Devlin J, Bosma M, Mishra G, . . PaLM: scaling language modeling with pathways. The Journal of Machine Learning Research, 2023, 24( 1): 240

[13]

Zhang S, Roller S, Goyal N, Artetxe M, Chen M, Chen S, Dewan C, Diab M, Li X, Lin XV, Mihaylov T, Ott M, Shleifer S, Shuster K, Simig D, Koura PS, Sridhar A, Wang T, Zettlemoyer L. OPT: open pre-trained transformer language models. 2022, arXiv preprint arXiv: 2205.01068

[14]

Touvron H, Lavril T, Izacard G, Martinet X, Lachaux MA, Lacroix T, Rozière B, Goyal N, Hambro E, Azhar F, Rodriguez A, Joulin A, Grave E, Lample G. LLaMA: open and efficient foundation language models. 2023, arXiv preprint arXiv: 2302.13971

[15]

Wei J, Bosma M, Zhao V, Guu K, Yu AW, Lester B, Du N, Dai AM, Le QV. Finetuned language models are zero-shot learners. In: Proceedings of the 10th International Conference on Learning Representations. 2022

[16]

Wei J, Wang X, Schuurmans D, Bosma M, Ichter B, Xia F, Chi EH, Le QV, Zhou D. Chain-of-thought prompting elicits reasoning in large language models. In: Proceedings of the 36th International Conference on Neural Information Processing Systems. 2022, 1800

[17]

Min S, Lyu X, Holtzman A, Artetxe M, Lewis M, Hajishirzi H, Zettlemoyer L. Rethinking the role of demonstrations: what makes in-context learning work? In: Proceedings of 2022 Conference on Empirical Methods in Natural Language Processing. 2022, 11048–11064

[18]

Wei J, Tay Y, Bommasani R, Raffel C, Zoph B, Borgeaud S, Yogatama D, Bosma M, Zhou D, Metzler D, Chi EH, Hashimoto T, Vinyals O, Liang P, Dean J, Fedus W. Emergent abilities of large language models. Transactions on Machine Learning Research, 2022, 2022

[19]

Chen Q, Qin L, Wang J, Zhou J, Che W. Unlocking the capabilities of thought: a reasoning boundary framework to quantify and optimize chain-of-thought. In: Proceedings of the 38th International Conference on Neural Information Processing Systems. 2024, 1740

[20]

Wang J, Liang Y, Meng F, Zou B, Li Z, Qu J, Zhou J. Zero-shot cross-lingual summarization via large language models. In: Proceedings of the 4th New Frontiers in Summarization Workshop. 2023, 12–23

[21]

Wang Y, Zhang Z, Wang R. Element-aware summarization with large language models: expert-aligned evaluation and chain-of-thought method. In: Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). 2023, 8640–8665

[22]

Wang L, Lyu C, Ji T, Zhang Z, Yu D, Shi S, Tu Z. Document-level machine translation with large language models. In: Proceedings of 2023 Conference on Empirical Methods in Natural Language Processing. 2023, 16646–16661

[23]

Peng K, Ding L, Zhong Q, Shen L, Liu X, Zhang M, Ouyang Y, Tao D. Towards making the most of ChatGPT for machine translation. In: Proceedings of Findings of the Association for Computational Linguistics: EMNLP 2023. 2023, 5622–5633

[24]

Wei X, Cui X, Cheng N, Wang X, Zhang X, Huang S, Xie P, Xu J, Chen Y, Zhang M, Jiang Y, Han W. Zero-shot information extraction via chatting with ChatGPT. 2023, arXiv preprint arXiv: 2302.10205

[25]

Wan Z, Cheng F, Mao Z, Liu Q, Song H, Li J, Kurohashi S. GPT-RE: in-context learning for relation extraction using large language models. In: Proceedings of 2023 Conference on Empirical Methods in Natural Language Processing. 2023, 3534–3547

[26]

Huang JT, Lam MH, Li EJ, Ren S, Wang W, Jiao W, Tu Z, Lyu MR. Emotionally numb or empathetic? Evaluating how LLMs feel using EmotionBench. 2023, arXiv preprint arXiv: 2308.03656

[27]

Wang Z, Xie Q, Ding Z, Feng Y, Xia R. Is ChatGPT a good sentiment analyzer? A preliminary study. 2023, arXiv preprint arXiv: 2304.04339

[28]

Chen Q, Qin L, Liu J, Peng D, Guan J, Wang P, Hu M, Zhou Y, Gao T, Che W. Towards reasoning era: a survey of long chain-of-thought for reasoning large language models. 2025, arXiv preprint arXiv: 2503.09567

[29]

Zhang Y, Chen Q, Li M, Che W, Qin L. AutoCAP: towards automatic cross-lingual alignment planning for zero-shot chain-of-thought. In: Proceedings of Findings of the Association for Computational Linguistics: ACL 2024. 2024, 9191–9200

[30]

Ren L, Liu Y, Ouyang C, Yu Y, Zhou S, He Y, Wan Y . DyLas: a dynamic label alignment strategy for large-scale multi-label text classification. Information Fusion, 2025, 120: 103081

[31]

Kojima T, Gu SS, Reid M, Matsuo Y, Iwasawa Y. Large language models are zero-shot reasoners. In: Proceedings of the 36th International Conference on Neural Information Processing Systems. 2022, 1613

[32]

Houlsby N, Giurgiu A, Jastrzebski S, Morrone B, De Laroussilhe Q, Gesmundo A, Attariyan M, Gelly S. Parameter-efficient transfer learning for NLP. In: Proceedings of the 36th International Conference on Machine Learning. 2019, 2790–2799

[33]

Hu EJ, Shen Y, Wallis P, Allen-Zhu Z, Li Y, Wang S, Wang L, Chen W. LoRA: low-rank adaptation of large language models. In: Proceedings of the 10th International Conference on Learning Representations. 2022

[34]

Li XL, Liang P. Prefix-tuning: optimizing continuous prompts for generation. In: Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers). 2021, 4582–4597

[35]

Dettmers T, Pagnoni A, Holtzman A, Zettlemoyer L. QLORA: efficient finetuning of quantized LLMs. In: Proceedings of the 37th International Conference on Neural Information Processing Systems., 2023, 441

[36]

Mundra N, Doddapaneni S, Dabre R, Kunchukuttan A, Puduppully R, Khapra MM. A comprehensive analysis of adapter efficiency. In: Proceedings of the 7th Joint International Conference on Data Science & Management of Data (11th ACM IKDD CODS and 29th COMAD). 2024, 136–154

[37]

Wankhade M, Chandra Sekhara Rao A, Kulkarni C . A survey on sentiment analysis methods, applications, and challenges. Artificial Intelligence Review, 2022, 55( 7): 5731–5780

[38]

Belkhir A, Sadat F. Beyond information: is ChatGPT empathetic enough? In Proceedings of the 14th International Conference on Recent Advances in Natural Language Processing. 2023, 159–169

[39]

Zhang W, Deng Y, Liu B, Pan SJ, Bing L. Sentiment analysis in the era of large language models: a reality check. In: Proceedings of Findings of the Association for Computational Linguistics: NAACL 2024. 2024, 3881–3906

[40]

Koto F, Beck T, Talat Z, Gurevych I, Baldwin T. Zero-shot sentiment analysis in low-resource languages using a multilingual sentiment lexicon. In: Proceedings of the 18th Conference of the European Chapter of the Association for Computational Linguistics (Volume 1: Long Papers). 2024, 298–320

[41]

Du K, Xing F, Mao R, Cambria E. An evaluation of reasoning capabilities of large language models in financial sentiment analysis. In: Proceedings of 2024 IEEE Conference on Artificial Intelligence (CAI). 2024, 189–194

[42]

Zhao W, Zhao Y, Lu X, Wang S, Tong Y, Qin B. Is ChatGPT equipped with emotional dialogue capabilities? 2023, arXiv preprint arXiv: 2304.09582

[43]

Xu X, Zhang JD, Xiao R, Xiong L. The limits of chatgpt in extracting aspect-category-opinion-sentiment quadruples: a comparative analysis. 2023, arXiv preprint arXiv: 2310.06502

[44]

Lu Y, Ji Z, Du J, Shanqing Y, Xuan Q, Zhou T. From LLM-anation to LLM-orchestrator: coordinating small models for data labeling. 2025, arXiv preprint arXiv: 2506.16393

[45]

Sun X, Li X, Zhang S, Wang S, Wu F, Li J, Zhang T, Wang G. Sentiment analysis through LLM negotiations. 2023, arXiv preprint arXiv: 2311.01876

[46]

Zhang T, Irsan IC, Thung F, Lo D . Revisiting sentiment analysis for software engineering in the era of large language models. ACM Transactions on Software Engineering and Methodology, 2025, 34( 3): 60

[47]

Zhang K, Gutierrez BJ, Su Y. Aligning instruction tasks unlocks large language models as zero-shot relation extractors. In: Proceedings of Findings of the Association for Computational Linguistics: ACL 2023. 2023, 794–812

[48]

Xie T, Li Q, Zhang J, Zhang Y, Liu Z, Wang H. Empirical study of zero-shot NER with ChatGPT. In: Proceedings of 2023 Conference on Empirical Methods in Natural Language Processing. 2023, 7935–7956, doi: 10.18653/v1/2023.emnlp-main.493

[49]

Li M, Zhang R. How far is language model from 100% few-shot named entity recognition in medical domain. 2023, arXiv preprint arXiv: 2307.00186

[50]

Li P, Sun T, Tang Q, Yan H, Wu Y, Huang X, Qiu X. CodeIE: large code generation models are better few-shot information extractors. In: Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)., 2023, 15339–15353

[51]

Bi Z, Chen J, Jiang Y, Xiong F, Guo W, Chen H, Zhang N. CodeKGC: code language model for generative knowledge graph construction. 2023, arXiv preprint arXiv: 2304.09048

[52]

Fornasiere R, Brunello N, Scotti V, Carman MJ. Medical information extraction with large language models. In: Proceedings of the 7th International Conference on Natural Language and Speech Processing (ICNLSP 2024). 2024, 456–466

[53]

Tang Y, Xiao Z, Li X, Fang Q, Zhang Q, Yee Tak Fong D, Tsz Tsun Lai F, Sze Ling Chui C, Wai Yin Chan E, Chi Kei Wong I. Large language model in medical information extraction from titles and abstracts with prompt engineering strategies: a comparative study of GPT-3.5 and GPT-4. 2024, MedRxiv

[54]

Pan W, Chen Q, Xu X, Che W, Qin L. A preliminary evaluation of chatgpt for zero-shot dialogue understanding. 2023, arXiv preprint arXiv: 2304.04256

[55]

He M, Garner PN. Can ChatGPT detect intent? Evaluating large language models for spoken language understanding. In: Proceedings of the 24th Annual Conference of the International Speech Communication Association., 2023, 1109–1113

[56]

Hudeček V, Dušek O. Are LLMs all you need for task-oriented dialogue? 2023, arXiv preprint arXiv: 2304.06556

[57]

Heck M, Lubis N, Ruppik B, Vukovic R, Feng S, Geishauser C, Lin HC, van Niekerk C, Gašić M. ChatGPT for zero-shot dialogue state tracking: a solution or an opportunity? In: Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers). 2023, 936–950

[58]

Gao H, Lin TE, Li H, Yang M, Wu Y, Ma W, Huang F, Li Y. Self-explanation prompting improves dialogue understanding in large language models. In: Proceedings of 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024). 2024, 14567–14578

[59]

Li Z, Chen W, Li S, Wang H, Qian J, Yan X. Controllable dialogue simulation with in-context learning. In: Proceedings of Findings of the Association for Computational Linguistics: EMNLP 2022. 2022, 4330–4347

[60]

Zhang Y, Yang J, Yu K, Dai Y, Storks S, Bao Y, Pan J, Devraj N, Ma Z, Chai J. SEAGULL: an embodied agent for instruction following through situated dialog. 2023

[61]

Zhang X, Peng B, Li K, Zhou J, Meng H. SGP-TOD: building task bots effortlessly via schema-guided LLM prompting. In: Proceedings of Findings of the Association for Computational Linguistics: EMNLP 2023. 2023, 13348–13369

[62]

Wu Y, Dong G, Xu W. Semantic parsing by large language models for intricate updating strategies of zero-shot dialogue state tracking. In: Proceedings of Findings of the Association for Computational Linguistics: EMNLP 2023. 2023, 11093–11099

[63]

Snigdha Sarathi Das S, Shah C, Wan M, Neville J, Yang L, Andersen R, Buscher G, Safavi T. S3-DST: structured open-domain dialogue segmentation and state tracking in the era of LLMs. In: Proceedings of Findings of the Association for Computational Linguistics: ACL 2024. 2024, 14996–15014

[64]

Chi RA, Kim J, Hickmann S, Li S, Chi G, Atchariyachanvanit T, Yu K, Chi NA, Dai G, Rammoorthy S, Wang JH, Sarthi P, Adams V, Xu BY, Xu BZ, Park K, Cao S, Manning CD. Dialogue distillery: crafting interpolable, interpretable, and introspectable dialogue from LLMs. 2023

[65]

Hu Y, Lee CH, Xie T, Yu T, Smith NA, Ostendorf M. In-context learning for few-shot dialogue state tracking. In: Proceedings of Findings of the Association for Computational Linguistics: EMNLP 2022. 2022, 2627–2643

[66]

King B, Flanigan J. Diverse retrieval-augmented in-context learning for dialogue state tracking. In: Proceedings of Findings of the Association for Computational Linguistics: ACL 2023. 2023, 5570–5585

[67]

Addlesee A, Sieińska W, Gunson N, Garcia DH, Dondrup C, Lemon O. Multi-party goal tracking with LLMs: comparing pre-training, fine-tuning, and prompt engineering. In: Proceedings of the 24th Annual Meeting of the Special Interest Group on Discourse and Dialogue. 2023, 229–241

[68]

Chung W, Cahyawijaya S, Wilie B, Lovenia H, Fung P. InstructTODS: large language models for end-to-end task-oriented dialogue systems. In: Proceedings of the 2nd Workshop on Natural Language Interfaces. 2023, 1–21

[69]

Lee CH, Cheng H, Ostendorf M. OrchestraLLM: efficient orchestration of language models for dialogue state tracking. In: Proceedings of 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers). 2024, 1434–1445

[70]

Lin E, Hale J, Gratch J. Toward a better understanding of the emotional dynamics of negotiation with large language models. In: Proceedings of the 24th International Symposium on Theory, Algorithmic Foundations, and Protocol Design for Mobile Networks and Mobile Computing. 2023, 545–550

[71]

Cao L. DiagGPT: an LLM-based chatbot with automatic topic management for task-oriented dialogue. 2023, arXiv preprint arXiv: 2308.08043

[72]

Singha A, Cambronero J, Gulwani S, Le V, Parnin C. Tabular representation, noisy operators, and impacts on table structure understanding tasks in LLMs. In: Proceedings of NeurIPS 2023 Second Table Representation Learning Workshop. 2023

[73]

Patnaik S, Changwal H, Aggarwal M, Bhatia S, Kumar Y, Krishnamurthy B. CABINET: content relevance-based noise reduction for table question answering. In: Proceedings of the 12th International Conference on Learning Representations. 2024

[74]

Ye Y, Hui B, Yang M, Li B, Huang F, Li Y. Large language models are versatile decomposers: decomposing evidence and questions for table-based reasoning. In: Proceedings of the 46th International ACM SIGIR Conference on Research and Development in Information Retrieval. 2023, 174–184

[75]

Ye J, Du M, Wang G. Dataframe QA: a universal LLM framework on dataframe question answering without data exposure. In: Proceedings of the 16th Asian Conference on Machine Learning. 2025, 575–590

[76]

Sui Y, Zhou M, Zhou M, Han S, Zhang D. GPT4Table: can large language models understand structured table data? A benchmark and empirical study. 2023, arXiv preprint arXiv: 2305.13062

[77]

Sui Y, Zou J, Zhou M, He X, Du L, Han S, Zhang D. TAP4LLM: table provider on sampling, augmenting, and packing semi-structured data for large language model reasoning. In: Proceedings of Findings of the Association for Computational Linguistics: EMNLP 2024. 2024, 10306–10323

[78]

Cheng Z, Xie T, Shi P, Li C, Nadkarni R, Hu Y, Xiong C, Radev D, Ostendorf M, Zettlemoyer L, Smith NA, Yu T. Binding language models in symbolic languages. In: Proceedings of the 11th International Conference on Learning Representations. 2023

[79]

Zhang W, Shen Y, Lu W, Zhuang Y. Data-copilot: bridging billions of data and humans with autonomous workflow. 2023, arXiv preprint arXiv: 2306.07209

[80]

Zhang Z, Li X, Gao Y, Lou JG. CRT-QA: a dataset of complex reasoning question answering over tabular data. In: Proceedings of 2023 Conference on Empirical Methods in Natural Language Processing. 2023, 2131–2153, doi: 10.18653/v1/2023.emnlp-main.132

[81]

Zhang Y, Henkel J, Floratou A, Cahoon J, Deep S, Patel JM . ReAcTable: enhancing react for table question answering. Proceedings of the VLDB Endowment, 2024, 17( 8): 1981–1994

[82]

Zhang H, Si Q, Fu P, Lin Z, Wang W. Are large language models table-based fact-checkers? In: Proceedings of the 27th International Conference on Computer Supported Cooperative Work in Design. 2024, 3086–3091

[83]

Chen W. Large language models are few(1)-shot table reasoners. In: Proceedings of Findings of the Association for Computational Linguistics: EACL 2023. 2023, 1120–1130

[84]

Luo T, Lei F, Lei J, Liu W, He S, Zhao J, Liu K. HRoT: hybrid prompt strategy and retrieval of thought for table-text hybrid question answering. 2023, arXiv preprint arXiv: 2309.12669

[85]

Li H, Su J, Chen Y, Li Q, Zhang Z. SheetCopilot: bringing software productivity to the next level through large language models. In: Proceedings of the 37th International Conference on Neural Information Processing Systems. 2023, 220

[86]

Jiang J, Zhou K, Dong Z, Ye K, Zhao X, Wen JR. StructGPT: a general framework for large language model to reason over structured data. In: Proceedings of 2023 Conference on Empirical Methods in Natural Language Processing. 2023, 9237–9251, doi: 10.18653/v1/2023.emnlp-main.574

[87]

Wang Z, Zhang H, Li CL, Eisenschlos JM, Perot V, Wang Z, Miculicich L, Fujii Y, Shang J, Lee CY, Pfister T. Chain-of-table: evolving tables in the reasoning chain for table understanding. In: Proceedings of the 12th International Conference on Learning Representations. 2024

[88]

Kong K, Zhang J, Shen Z, Srinivasan B, Lei C, Faloutsos C, Rangwala H, Karypis G. OpenTab: advancing large language models as open-domain table reasoners. In: Proceedings of the 12th International Conference on Learning Representations. 2024

[89]

Goyal T, Li JJ, Durrett G. News summarization and evaluation in the era of GPT-3. 2022, arXiv preprint arXiv: 2209.12356

[90]

Ravaut M, Sun A, Chen NF, Joty S. On context utilization in summarization with large language models. In: Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). 2023, 2764–2781

[91]

Bhaskar A, Fabbri AR, Durrett G. Prompted opinion summarization with GPT-3.5. In: Proceedings of Findings of the Association for Computational Linguistics: ACL 2023. 2023, 9282–9300

[92]

Zhang T, Ladhak F, Durmus E, Liang P, McKeown K, Hashimoto TB . Benchmarking large language models for news summarization. Transactions of the Association for Computational Linguistics, 2023, 12: 39–57

[93]

Zhang H, Liu X, Zhang J. Extractive summarization via chatgpt for faithful summary generation. In: Proceedings of Findings of the Association for Computational Linguistics: EMNLP 2023. 2023, 3270–3278

[94]

Adams G, Fabbri A, Ladhak F, Lehman E, Elhadad N. From sparse to dense: GPT-4 summarization with chain of density prompting. 2023, arXiv preprint arXiv: 2309.04269

[95]

Tang Y, Puduppully R, Liu Z, Chen N. In-context learning of large language models for controlled dialogue summarization: a holistic benchmark and empirical analysis. In: Proceedings of the 4th New Frontiers in Summarization Workshop. 2023, 56–67, doi: 10.18653/v1/2023.newsum-1.6

[96]

Chen M, Tworek J, Jun H, Yuan Q, Ponde de Oliveira Pinto H, , . Evaluating large language models trained on code. 2021, arXiv preprint arXiv: 2107.03374

[97]

Nijkamp E, Pang B, Hayashi H, Tu L, Wang H, Zhou Y, Savarese S, Xiong C. CodeGen: an open large language model for code with multi-turn program synthesis. In: Proceedings of the 11th International Conference on Learning Representations. 2023

[98]

Christopoulou F, Lampouras G, Gritta M, Zhang G, Guo Y, , . PanGu-coder: program synthesis with function-level language modeling. 2022, arXiv preprint arXiv: 2207.11280

[99]

Luo Z, Xu C, Zhao P, Sun Q, Geng X, Hu W, Tao C, Ma J, Lin Q, Jiang D. WizardCoder: Empowering code large language models with evol-instruct. In: Proceedings of the 12th International Conference on Learning Representations. 2024

[100]

Allal LB, Li R, Kocetkov D, Mou C, Akiki C, et al. SantaCoder: don’t reach for the stars! 2023, arXiv preprint arXiv: 2301.03988

[101]

Li R, Ben Allal L, Zi Y, Muennighoff N, Kocetkov D, et al. StarCoder: may the source be with you! Transactions on Machine Learning Research, 2023, 2023

[102]

Li Y, Bubeck S, Eldan R, Del Giorno A, Gunasekar S, Lee YT. Textbooks are all you need II: phi-1.5 technical report. 2023, arXiv preprint arXiv: 2309.05463

[103]

Guo D, Zhu Q, Yang D, Xie Z, Dong K, Zhang W, Chen G, Bi X, Wu Y, Li YK, Luo F, Xiong Y, Liang W. DeepSeek-coder: when the large language model meets programming — the rise of code intelligence. 2024, arXiv preprint arXiv: 2401.14196

[104]

Roziere B, Gehring J, Gloeckle F, Sootla S, Gat I, , . Code llama: open foundation models for code. 2023, arXiv preprint arXiv: 2308.12950

[105]

Zheng Q, Xia X, Zou X, Dong Y, Wang S, Xue Y, Wang Z, Shen L, Wang A, Li Y, Su T, Yang Z, Tang J. CodeGeeX: a pre-trained model for code generation with multilingual evaluations on HumanEval-X. 2023, arXiv preprint arXiv: 2303.17568

[106]

Wei X, Wei H, Lin H, Li T, Zhang P, Ren X, Li M, Wan Y, Cao Z, Xie B, Hu T, Li S, Hui B, Yu B, Liu D, Yang B, Huang F, Xie J. PolyLM: an open source polyglot large language model. 2023, arXiv preprint arXiv: 2307.06018

[107]

Zhu W, Lv Y, Dong Q, Yuan F, Xu J, Huang S, Kong L, Chen J, Li L. Extrapolating large language models to non-english by aligning languages. 2023, arXiv preprint arXiv: 2308.04948

[108]

Li C, Liu M, Zhang H, Chen Y, Xu J, Zhou M. MT2: towards a multi-task machine translation model with translation-specific in-context learning. In: Proceedings of 2023 Conference on Empirical Methods in Natural Language Processing. 2023, 8616–8627

[109]

Li J, Tang Z, Ding Y, Wang P, Guo P, You W, Qiao D, Chen W, Fu G, Zhu Q, Zhou G, Zhang M. OpenBA: an open-sourced 15B bilingual asymmetric seq2seq model pre-trained from scratch. 2023, arXiv preprint arXiv: 2309.10706

[110]

Alves DM, Guerreiro NM, Alves J, Pombal J, Rei R, de Souza J, Colombo P, Martins A. Steering large language models for machine translation with finetuning and in-context learning. In: Proceedings of Findings of the Association for Computational Linguistics: EMNLP 2023. 2023, 11127–11148

[111]

Raunak V, Awadalla HH, Menezes A. Dissecting in-context learning of translations in GPTs. 2023, arXiv preprint arXiv: 2310.15987

[112]

Lu H, Yang H, Huang H, Zhang D, Lam W, Wei F. Chain-of-dictionary prompting elicits translation in large language models. In: Proceedings of 2024 Conference on Empirical Methods in Natural Language Processing. 2024, 958–976

[113]

Zhang Z, Zhang A, Li M, Smola A. Automatic chain of thought prompting in large language models. In: Proceedings of the 11th International Conference on Learning Representations. 2023

[114]

Wang X, Wei J, Schuurmans D, Le QV, Chi EH, Narang S, Chowdhery A, Zhou D. Self-consistency improves chain of thought reasoning in language models. In: Proceedings of the 11th International Conference on Learning Representations. 2023

[115]

Lu P, Qiu L, Chang KW, Wu YN, Zhu SC, Rajpurohit T, Clark P, Kalyan A. Dynamic prompt learning via policy gradient for semi-structured mathematical reasoning. In: Proceedings of the 11th International Conference on Learning Representations. 2023

[116]

Gao L, Madaan A, Zhou S, Alon U, Liu P, Yang Y, Callan J, Neubig G. PAL: program-aided language models. In: Proceedings of the 40th International Conference on Machine Learning. 2023, 10764–10799

[117]

Das D, Banerjee D, Aditya S, Kulkarni A. MATHSENSEI: a tool-augmented large language model for mathematical reasoning. In: Proceedings of 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers). 2024, 942–966

[118]

Wang Z, Xia R, Yu J. UnifiedABSA: a unified ABSA framework based on multi-task instruction tuning. 2022, arXiv preprint arXiv: 2211.10986

[119]

Varia S, Wang S, Halder K, Vacareanu R, Ballesteros M, Benajiba Y, John NA, Anubhai R, Muresan S, Roth D. Instruction tuning for few-shot aspect-based sentiment analysis. In: Proceedings of the 13th Workshop on Computational Approaches to Subjectivity, Sentiment, & Social Media Analysis. 2023, 19–27

[120]

Yang B, Li J. Visual elements mining as prompts for instruction learning for target-oriented multimodal sentiment classification. In: Proceedings of Findings of the Association for Computational Linguistics: EMNLP 2023. 2023, 6062–6075

[121]

Zhao J, Liu K, Xu L . Sentiment analysis: mining opinions, sentiments, and emotions. Computational Linguistics, 2016, 42( 3): 595–598

[122]

Qiu H, He H, Zhang S, Li A, Lan Z. SMILE: single-turn to multi-turn inclusive language expansion via ChatGPT for mental health support. In: Proceedings of Findings of the Association for Computational Linguistics: EMNLP 2024. 2024, 615–636

[123]

Lu D, Ran S, Tetreault J, Jaimes A. Event extraction as question generation and answering. In: Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers). 2023, 1666–1688

[124]

Gan C, Zhang Q, Mori T. GIELLM: Japanese general information extraction large language model utilizing mutual reinforcement effect. 2023, arXiv preprint arXiv: 2311.06838

[125]

Sainz O, García-Ferrero I, Agerri R, Lopez de Lacalle O, Rigau G, Agirre E. GoLLIE: annotation guidelines improve zero-shot information-extraction. In: Proceedings of the 12th International Conference on Learning Representations. 2024

[126]

Wang X, Zhou W, Zu C, Xia H, Chen T, Zhang Y, Zheng R, Ye J, Zhang Q, Gui T, Kang J, Yang J, Li S, Du C. InstructUIE: multi-task instruction tuning for unified information extraction. 2023, arXiv preprint arXiv: 2304.08085

[127]

Snigdha Sarathi Das S, Zhang RH, Shi P, Yin W, Zhang R. Unified low-resource sequence labeling by sample-aware dynamic sparse finetuning. In: Proceedings of 2023 Conference on Empirical Methods in Natural Language Processing. 2023, 6998–7010, doi: 10.18653/v1/2023.emnlp-main.433

[128]

Liang Z, Wei F, Jie Y, Qian Y, Hao Z, Han B. Prompts can play lottery tickets well: achieving lifelong information extraction via lottery prompt tuning. In: Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). 2023, 277–292, doi: 10.18653/v1/2023.acl-long.16

[129]

Dagdelen J, Dunn A, Lee S, Walker N, Rosen AS, Ceder G, Persson KA, Jain A . Structured information extraction from scientific text with large language models. Nature Communications, 2024, 15( 1): 1418

[130]

Xue L, Zhang D, Dong Y, Tang J. AutoRE: document-level relation extraction with large language models. In: Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 3: System Demonstrations). 2024, 211–220

[131]

Rixewa D, Anderson K, Dubois L, Harrington M. Interleaved multi-modal document representations for large-scale information retrieval using large language models. 2024

[132]

Xie T, Wu CH, Shi P, Zhong R, Scholak T, et al. UnifiedSKG: unifying and multi-tasking structured knowledge grounding with text-to-text language models. In: Proceedings of 2022 Conference on Empirical Methods in Natural Language Processing. 2022, 602–631

[133]

Zhao J, Gupta R, Cao Y, Yu D, Wang M, Lee H, Rastogi A, Shafran I, Wu Y. Description-driven task-oriented dialog modeling. 2022, arXiv preprint arXiv: 2201.08904

[134]

Gupta R, Lee H, Zhao J, Cao Y, Rastogi A, Wu Y. Show, don’t tell: demonstrations outperform descriptions for schema-guided task-oriented dialogue. In: Proceedings of 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. 2022, 4541–4549

[135]

Yu D, Wang M, Cao Y, El Shafey L, Shafran I, Soltau H. Knowledge-grounded dialog state tracking. In: Proceedings of Findings of the Association for Computational Linguistics: EMNLP 2022. 2022, 3428–3435

[136]

Feng Y, Lu Z, Liu B, Zhan L, Wu XM. Towards LLM-driven dialogue state tracking. In: Proceedings of 2023 Conference on Empirical Methods in Natural Language Processing. 2023, 739–755

[137]

Liu H, Cai Y, Zhou Y, Ou Z, Huang Y, Feng J. Prompt pool based class-incremental continual learning for dialog state tracking. In: Proceedings of 2023 IEEE Automatic Speech Recognition and Understanding Workshop (ASRU). 2023, 1–8

[138]

Li P, He Y, Yashar D, Cui W, Ge S, Zhang H, Fainman DR, Zhang D, Chaudhuri S. Table-GPT: table-tuned GPT for diverse table tasks. 2023, arXiv preprint arXiv: 2310.09263

[139]

Xue S, Jiang C, Shi W, Cheng F, Chen K, Yang H, Zhang Z, He J, Zhang H, Wei G, Zhao W, Zhou F, Qi D, Yi H, Liu S, Chen F. DB-GPT: empowering database interactions with private large language models. 2023, arXiv preprint arXiv: 2312.17449

[140]

Zhang H, Dong Y, Xiao C, Oyamada M. Jellyfish: a large language model for data preprocessing. 2023, arXiv preprint arXiv: 2312.01678

[141]

Zhu F, Liu Z, Feng F, Wang C, Li M, Chua TS. TAT-LLM: a specialized language model for discrete reasoning over tabular and textual data. 2024, arXiv preprint arXiv: 2401.13223

[142]

Bai F, Kang J, Stanovsky G, Freitag D, Ritter A. Schema-driven information extraction from heterogeneous tables. In: Proceedings of Findings of the Association for Computational Linguistics: EMNLP 2024. 2024, 10252–10273

[143]

Zhang T, Yue X, Li Y, Sun H. TableLlama: towards open large generalist models for tables. In: Proceedings of 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers). 2024, 6024–6044

[144]

He X, Liu Y, Zhou M, He Y, Dong H, Han S, Yuan Z, Zhang D. TableLoRA: low-rank adaptation on table structure understanding for large language models. In: Proceedings of the 63rd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). 2025, 22376–22391

[145]

Li P, He Y, Yashar D, Cui W, Ge S, Zhang H, Fainman DR, Zhang D, Chaudhuri S . Table-GPT: table fine-tuned GPT for diverse table tasks. Proceedings of the ACM on Management of Data, 2024, 2( 3): 176

[146]

Pagnoni A, Fabbri AR, Kryscinski W, Wu CS. Socratic pretraining: question-driven pretraining for controllable summarization. In: Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). 2023, 12737–12755

[147]

Zhao L, Zheng F, Zeng W, He K, Xu W, Jiang H, Wu W, Wu Y. Domain-oriented prefix-tuning: towards efficient and generalizable fine-tuning for zero-shot dialogue summarization. In: Proceedings of 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. 2022, 4848–4862, doi: 10.18653/v1/2022.naacl-main.357

[148]

Yuan R, Wang Z, Cao Z, Li W. Few-shot query-focused summarization with prefix-merging. In: Proceedings of 2022 Conference on Empirical Methods in Natural Language Processing. 2022, 3704–3714

[149]

Feng X, Feng X, Du X, Kan MY, Qin B . Adapter-based selective knowledge distillation for federated multi-domain meeting summarization. IEEE/ACM Transactions on Audio, Speech, and Language Processing, 2024, 32: 3694–3708

[150]

Ravaut M, Chen H, Zhao R, Qin C, Joty S, Chen N. PromptSum: parameter-efficient controllable abstractive summarization. 2023, arXiv preprint arXiv: 2308.03117

[151]

Wang Y, Wang W, Joty S, Hoi SCH. CodeT5: identifier-aware unified pre-trained encoder-decoder models for code understanding and generation. In: Proceedings of 2021 Conference on Empirical Methods in Natural Language Processing. 2021, 8696–8708

[152]

Wang Y, Le H, Gotmare AD, Bui NDQ, Li J, Hoi SCH. CodeT5+: open code large language models for code understanding and generation. In: Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing. 2023, 1069–1088

[153]

Le H, Wang Y, Gotmare AD, Savarese S, Hoi SCH. CodeRL: mastering code generation through pretrained models and deep reinforcement learning. In: Proceedings of the 36th International Conference on Neural Information Processing Systems. 2022, 1549

[154]

Shojaee P, Jain A, Tipirneni S, Reddy CK. Execution-based code generation using deep reinforcement learning. Transactions on Machine Learning Research, 2023, 2023

[155]

Ayupov S, Chirkova N. Parameter-efficient finetuning of transformers for source code. 2022, arXiv preprint arXiv: 2212.05901

[156]

Zhuo TY, Zebaze A, Suppattarachai N, von Werra L, de Vries H, Liu Q, Muennighoff N. Astraios: parameter-efficient instruction tuning code large language models. 2024, arXiv preprint arXiv: 2401.00788

[157]

Weyssow M, Zhou X, Kim K, Lo D, Sahraoui H . Exploring parameter-efficient fine-tuning techniques for code generation with large language models. ACM Transactions on Software Engineering and Methodology, 2025, 34( 7): 204

[158]

Xu H, Kim YJ, Sharaf A, Awadalla HH. A paradigm shift in machine translation: boosting translation performance of large language models. In: Proceedings of the 12th International Conference on Learning Representations. 2024

[159]

Xu H, Sharaf A, Chen Y, Tan W, Shen L, Van Durme B, Murray K, Kim YJ. Contrastive preference optimization: pushing the boundaries of LLM performance in machine translation. In: Proceedings of the 41st International Conference on Machine Learning. 2024

[160]

Iyer V, Chen P, Birch A. Towards effective disambiguation for machine translation with large language models. In: Proceedings of the 8th Conference on Machine Translation. 2023, 482–495

[161]

Moslem Y, Haque R, Way A. Fine-tuning large language models for adaptive machine translation. 2023, arXiv preprint arXiv: 2312.12740

[162]

Üstün A, Stickland AC. When does parameter-efficient transfer learning work for machine translation? In: Proceedings of 2022 Conference on Empirical Methods in Natural Language Processing. 2022, 7919–7933

[163]

Wu B, Yuan F, Zhao H, Li L, Xu J. Extrapolating multilingual understanding models as multilingual generators. In: Proceedings of Findings of the Association for Computational Linguistics: EMNLP 2023. 2023, 15432–15444

[164]

Wu M, Vu TT, Qu L, Foster G, Haffari G. Adapting large language models for document-level machine translation. 2024, arXiv preprint arXiv: 2401.06468

[165]

Luo H, Sun Q, Xu C, Zhao P, Lou JG, Tao C, Geng X, Lin Q, Chen S, Tang Y, Zhang D. WizardMath: empowering mathematical reasoning for large language models via reinforced evol-instruct. In: Proceedings of the Thirteenth International Conference on Learning Representations. 2025

[166]

Yue X, Qu X, Zhang G, Fu Y, Huang W, Sun H, Su Y, Chen W. MammoTH: building math generalist models through hybrid instruction tuning. In: Proceedings of the 12th International Conference on Learning Representations. 2024

[167]

Ho N, Schmid L, Yun SY. Large language models are reasoning teachers. In: Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). 2023, 14852–14882, doi: 10.18653/v1/2023.acl-long.830

[168]

Schick T, Dwivedi-Yu J, Dessí R, Raileanu R, Lomeli M, Hambro E, Zettlemoyer L, Cancedda N, Scialom T. Toolformer: language models can teach themselves to use tools. In: Proceedings of the 37th International Conference on Neural Information Processing Systems. 2023, 2997

[169]

Hu Z, Wang L, Lan Y, Xu W, Lim EP, Bing L, Xu X, Poria S, Lee RKW. LLM-adapters: an adapter family for parameter-efficient fine-tuning of large language models. In: Proceedings of 2023 Conference on Empirical Methods in Natural Language Processing. 2023, 5254–5276

[170]

Shi W, Hu Z, Bin Y, Liu J, Yang Y, Ng SK, Bing L, Lee RKW. Math-LLaVA: bootstrapping mathematical reasoning for multimodal large language models. In: Proceedings of Findings of the Association for Computational Linguistics: EMNLP 2024. 2024, 4663–4680

[171]

Shao Z, Wang P, Zhu Q, Xu R, Song J, Bi X, Zhang H, Zhang M, Li YK, Wu Y, Guo D. DeepSeekMath: pushing the limits of mathematical reasoning in open language models. 2024, arXiv preprint arXiv: 2402.03300

[172]

Luo L, Liu Y, Liu R, Phatale S, Guo M, Lara H, Li Y, Shu L, Zhu Y, Meng L, Sun J, Rastogi A. Improve mathematical reasoning in language models by automated process supervision. 2024, arXiv preprint arXiv: 2406.06592

[173]

Chen C, Wang X, Lin TE, Lv A, Wu Y, Gao X, Wen JR, Yan R, Li Y. Masked thought: simply masking partial reasoning steps can improve mathematical reasoning learning of language models. In: Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). 2024, 5872–5900

[174]

Liu T, Guo Q, Yang Y, Hu X, Zhang Y, Qiu X, Zhang Z. Plan, verify and switch: integrated reasoning with diverse X-of-thoughts. In: Proceedings of 2023 Conference on Empirical Methods in Natural Language Processing. 2023, 2807–2822

[175]

Yu X, Zhou B, Cheng H, Roth D. ReasonAgain: using extractable symbolic programs to evaluate mathematical reasoning. 2024, arXiv preprint arXiv: 2410.19056

[176]

Ranaldi L, Valentino M, Freitas A. Improving chain-of-thought reasoning via quasi-symbolic abstractions. In: Proceedings of the 63rd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). 2025, 17222–17240

[177]

Srivastava G, Bi Z, Lu M, Wang X. DEBATE, TRAIN, EVOLVE: self evolution of language model reasoning. 2025, arXiv preprint arXiv: 2505.15734

[178]

Cai H, Yang Y, Li Z. System-2 mathematical reasoning via enriched instruction tuning. 2024, arXiv preprint arXiv: 2412.16964

[179]

Xu D, Chen W, Peng W, Zhang C, Xu T, Zhao X, Wu X, Zheng Y, Wang Y, Chen E . Large language models for generative information extraction: a survey. Frontiers of Computer Science, 2024, 18( 6): 186357

[180]

Siepmann RM, Baldini G, Schmidt CS, Truhn D, Müller-Franzes GA, Dada A, Kleesiek J, Nensa F, Hosch R . An automated information extraction model for unstructured discharge letters using large language models and GPT-4. Healthcare Analytics, 2025, 7: 100378

[181]

Gu B, Shao V, Liao Z, Carducci V, Brufau SR, Yang J, Desai RJ . Scalable information extraction from free text electronic health records using large language models. BMC Medical Research Methodology, 2025, 25( 1): 23

[182]

Xin Y, Luo S, Zhou H, Du J, Liu X, Fan Y, Li Q, Du Y. Parameter-efficient fine-tuning for pre-trained vision models: a survey. 2024, arXiv preprint arXiv: 2402.02242

[183]

Xin Y, Luo S, Liu X, Du Y, Zhou H, Cheng X, Lee C, Du J, Wang H, Chen M, Liu T, Hu G, Wan Z, Zhang R, Li A, Yi M, Liu X. V-PETL bench: a unified visual parameter-efficient transfer learning benchmark. In: Proceedings of the 38th International Conference on Neural Information Processing Systems. 2025, 2560

[184]

Qin L, Xie T, Che W, Liu T. A survey on spoken language understanding: recent advances and new frontiers. In: Proceedings of the 30th International Joint Conference on Artificial Intelligence., 2021, 4577–4584, doi: 10.24963/ijcai.2021/622

[185]

Sarikaya R, Crook PA, Marin A, Jeong M, Robichaud JP, Celikyilmaz A, Kim YB, Rochette A, Khan OZ, Liu X, Boies D, Anastasakos T, Feizollahi Z, Ramesh N, Suzuki H, Holenstein R, Krawczyk E, Radostev V. An overview of end-to-end language understanding and dialog management for personal digital assistants. In: Proceedings of 2016 IEEE Spoken Language Technology Workshop (SLT). 2016, 391–397

[186]

Yoon Y, Lee J, Kim K, Park C, Kim T. BlendX: complex multi-intent detection with blended patterns. In: Proceedings of 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024). 2024, 2428–2439

[187]

Qin L, Chen Q, Zhou J, Wang J, Fei H, Che W, Li M. Divide-solve-combine: an interpretable and accurate prompting framework for zero-shot multi-intent detection. In: Proceedings of the 39th AAAI Conference on Artificial Intelligence. 2025, 25038–25046

[188]

Qin L, Wei F, Chen Q, Zhou J, Huang S, Si J, Lu W, Che W. CroPrompt: cross-task interactive prompting for zero-shot spoken language understanding. In: Proceedings of 2025 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). 2025, 1–5

[189]

Dong W, Chen S, Yang Y. ProTOD: proactive task-oriented dialogue system based on large language model. In: Proceedings of the 31st International Conference on Computational Linguistics. 2025, 9147–9164

[190]

Acikgoz EC, Greer J, Datta A, Yang Z, Zeng W, Elachqar O, Koukoumidis E, Hakkani-Tür D, Tur G. Can a single model master both multi-turn conversations and tool use? CALM: a unified conversational agentic language model. In: Proceedings of the 63rd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). 2025, 12370–12390

[191]

Yin S, Huang P, Xu Y. MIDLM: multi-intent detection with bidirectional large language models. In: Proceedings of the 31st International Conference on Computational Linguistics. 2025, 2616–2625

[192]

Jin N, Siebert J, Li D, Chen Q. A survey on table question answering: recent advances. In: Proceedings of the 7th China Conference on Knowledge Graph and Semantic Computing: Knowledge Graph Empowers the Digital Economy. 2022, 174–186

[193]

Wang D, Dou L, Che W. A survey on table-and-text HybridQA: concepts, methods, challenges and future directions. 2022, arXiv preprint arXiv: 2212.13465

[194]

Zhang X, Wang D, Dou L, Zhu Q, Che W . A survey of table reasoning with large language models. Frontiers of Computer Science, 2025, 19( 9): 199348

[195]

Zhang X, Wang D, Xu K, Zhu Q, Che W. RoT: enhancing table reasoning with iterative row-wise traversals. 2025, arXiv preprint arXiv: 2505.15110

[196]

Shi T, Keneshloo Y, Ramakrishnan N, Reddy CK . Neural abstractive text summarization with sequence-to-sequence models. ACM Transactions on Data Science, 2018, 2( 1): 1

[197]

Godbole A, George JG, Shandilya S. Leveraging long-context large language models for multi-document understanding and summarization in enterprise applications. In: Proceedings of the 1st International Conference on Business Intelligence, Computational Mathematics, and Data Analytics. 2025, 208–224

[198]

Peters U, Chin-Yee B . Generalization bias in large language model summarization of scientific research. Royal Society Open Science, 2025, 12( 4): rsos241776

[199]

Yun J, Choi J, Jin K, Jang S, Jang J, Kim Y. SummPilot: bridging efficiency and customization for interactive summarization system. In: Proceedings of the 39th AAAI Conference on Artificial Intelligence. 2025, 29724–29726

[200]

Qorib MR, Hu Q, Ng HT. Just what you desire: constrained timeline summarization with self-reflection for enhanced relevance. In: Proceedings of the 39th AAAI Conference on Artificial Intelligence. 2025, 25065–25073

[201]

Zhu DH, Xiong YJ, Zhang JC, Xie XJ, Xia CM. Understanding before reasoning: enhancing chain-of-thought with iterative summarization pre-prompting. 2025, arXiv preprint arXiv: 2501.04341

[202]

Nandy A, Bandyopadhyay S. Language models of code are few-shot planners and reasoners for multi-document summarization with attribution. In: Proceedings of the 39th AAAI Conference on Artificial Intelligence. 2025, 24930–24938

[203]

Li Y, Peng B, He P, Galley M, Yu Z, Gao J. DIONYSUS: a pre-trained model for low-resource dialogue summarization. In: Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). 2023, 1368–1386

[204]

Wang L, Wu L, Song S, Wang Y, Gao C, Wang K. Distilling structured rationale from large language models to small language models for abstractive summarization. In: Proceedings of the 39th AAAI Conference on Artificial Intelligence. 2025, 25389–25397

[205]

Lu YJ, Hu TY, Koppula HS, Pouransari H, Chang JHR, Xia Y, Kong X, Zhu Q, Wang XS, Tuzel O, Vemulapalli R. Mutual reinforcement of LLM dialogue synthesis and summarization capabilities for few-shot dialogue summarization. In: Proceedings of Findings of the Association for Computational Linguistics: NAACL 2025. 2025, 7237–7256

[206]

Aali A, Van Veen D, Arefeen YI, Hom J, Bluethgen C, Reis EP, Gatidis S, Clifford N, Daws J, Tehrani AS, Kim J, Chaudhari AS . A dataset and benchmark for hospital course summarization with adapted large language models. Journal of the American Medical Informatics Association, 2025, 32( 3): 470–479

[207]

Wu J, Ning L, Liu L, Lee H, Wu N, Wang C, Prakash S, O’Banion S, Green B, Xie J. RLPF: reinforcement learning from prediction feedback for user summarization with LLMs. In: Proceedings of the 39th AAAI Conference on Artificial Intelligence. 2025, 25488–25496

[208]

Zhao H, Hui J, Howland J, Nguyen N, Zuo S, , . CodeGemma: open code models based on gemma. 2024, arXiv preprint arXiv: 2406.11409

[209]

Hui B, Yang J, Cui Z, Yang J, Liu D, , . Qwen2.5-coder technical report. 2024, arXiv preprint arXiv: 2409.12186

[210]

Dance-Seed B. Seed-coder: let the code model curate data for itself. See github.com/ByteDance-Seed/Seed-Coder/blob/master/Seed-Coder website, 2025

[211]

Cai W, Cao Y, Chen C, Chen C, Chen S, , . Every sample matters: leveraging mixture-of-experts and high-quality data for efficient and accurate code LLM. 2025, arXiv preprint arXiv: 2503.17793

[212]

Lu S, Guo D, Ren S, Huang J, Svyatkovskiy A, , . CodeXGLUE: a machine learning benchmark dataset for code understanding and generation. In: Proceedings of the 1st Neural Information Processing Systems Track on Datasets and Benchmarks. 2021

[213]

Ye Y, Zhang T, Jiang W, Huang H. Process-supervised reinforcement learning for code generation. 2025, arXiv preprint arXiv: 2502.01715

[214]

Zhang K, Li G, Li J, Dong Y, Jin Z. Focused-DPO: enhancing code generation through focused preference optimization on error-prone points. In: Proceedings of Findings of the Association for Computational Linguistics: ACL 2025. 2025, 9578–9591

[215]

Zeng H, Jiang D, Wang H, Nie P, Chen X, Chen W. ACECODER: acing coder RL via automated test-case synthesis. In: Proceedings of the 63rd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). 2025, 12023–12040

[216]

Wei Y, Duchenne O, Copet J, Carbonneaux Q, Zhang L, Fried D, Synnaeve G, Singh R, Wang SI. SWE-RL: advancing LLM reasoning via reinforcement learning on open software evolution. 2025, arXiv preprint arXiv: 2502.18449

[217]

Storhaug A, Li J. Parameter-efficient fine-tuning of large language models for unit test generation: an empirical study. 2024, arXiv preprint arXiv: 2411.02462

[218]

Zhang B, Liang P, Zhou X, Zhou X, Lo D, Feng Q, Li Z, Li L. A comprehensive evaluation of parameter-efficient fine-tuning on method-level code smell detection. 2024, arXiv preprint arXiv: 2412.13801

[219]

Bahdanau D, Cho K, Bengio Y. Neural machine translation by jointly learning to align and translate. In: Proceedings of 3rd International Conference on Learning Representations. 2015

[220]

Pang J, Ye F, Wong DF, Yu D, Shi S, Tu Z, Wang L . Salute the classic: revisiting challenges of machine translation in the age of large language models. Transactions of the Association for Computational Linguistics, 2025, 13: 73–95

[221]

Huang Y, Li B, Feng X, Huo W, Fu C, Liu T, Qin B. Aligning translation-specific understanding to general understanding in large language models. In: Proceedings of 2024 Conference on Empirical Methods in Natural Language Processing. 2024, 5028–5041

[222]

Zhu S, Cui M, Xiong D. Towards robust in-context learning for machine translation with large language models. In: Proceedings of 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024). 2024, 16619–16629

[223]

Feng Z, Cao S, Ren J, Su J, Chen R, Zhang Y, Xu Z, Hu Y, Wu J, Liu Z. MT-R1-zero: advancing LLM-based machine translation via R1-zero-like reinforcement learning. 2025, arXiv preprint arXiv: 2504.10160

[224]

Feng Z, Chen R, Zhang Y, Meng Z, Liu Z. Ladder: a model-agnostic framework boosting LLM-based machine translation to the next level. In: Proceedings of 2024 Conference on Empirical Methods in Natural Language Processing. 2024, 15377–15393

[225]

Lu P, Qiu L, Yu W, Welleck S, Chang KW. A survey of deep learning for mathematical reasoning. In: Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). 2023, 14605–14631

[226]

Yan Y, Wang S, Huo J, Yu PS, Hu X, Wen Q. Mathagent: leveraging a mixture-of-math-agent framework for real-world multimodal mathematical error detection. 2025, arXiv preprint arXiv: 2503.18132

[227]

OpenAI. GPT-4 technical report. 2023, arXiv preprint arXiv: 2303.08774

[228]

Tang Y, Zhan Y, Zan C, Lan L, Che Y. Elevating large language model reasoning ability with auto-enhanced zero-shot prompts. Mathematical Foundations of Computing, 2025

[229]

Yuksekgonul M, Bianchi F, Boen J, Liu S, Lu P, Huang Z, Guestrin C, Zou J . Optimizing generative AI by backpropagating language model feedback. Nature, 2025, 639( 8055): 609–616

[230]

Peng D, Zhou Y, Chen Q, Liu J, Chen J, Qin L. DLPO: towards a robust, efficient, and generalizable prompt optimization framework from a deep-learning perspective. 2025, arXiv preprint arXiv: 2503.13413

[231]

Zhang B, Liu Y, Dong X, Zang Y, Zhang P, Duan H, Cao Y, Lin D, Wang J. BoostStep: boosting mathematical capability of large language models via improved single-step reasoning. 2025, arXiv preprint arXiv: 2501.03226

[232]

Pang B, Dong H, Xu J, Savarese S, Zhou Y, Xiong C. BOLT: bootstrap long chain-of-thought in language models without distillation. 2025, arXiv preprint arXiv: 2502.03860

[233]

Yu Y, Zhang Y, Zhang D, Liang X, Zhang H, Zhang X, Khademi M, Awadalla HH, Wang J, Yang Y, Wei F. Chain-of-reasoning: towards unified mathematical reasoning in large language models via a multi-paradigm perspective. In: Proceedings of the 63rd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). 2025, 24914–24937

[234]

Qian C, Acikgoz EC, He Q, Wang H, Chen X, Hakkani-Tür D, Tur G, Ji H. ToolRL: reward is all tool learning needs. 2025, arXiv preprint arXiv: 2504.13958

[235]

Singh J, Chakraborty T, Nambi A. Self-evolved preference optimization for enhancing mathematical reasoning in small language models. 2025, arXiv preprint arXiv: 2503.04813

[236]

Prottasha NJ, Mahmud A, Sobuj MSI, Bhat P, Kowsher M, Yousefi N, Garibay OO . Parameter-efficient fine-tuning of large language models using semantic knowledge tuning. Scientific Reports, 2024, 14( 1): 30667

[237]

Alazraki L, Rei M. Meta-reasoning improves tool use in large language models. In: Proceedings of Findings of the Association for Computational Linguistics: NAACL 2025. 2025, 7885–7897

[238]

Qin L, Chen Q, Zhou Y, Chen Z, Li Y, Liao L, Li M, Che W, Yu PS. Multilingual large language model: a survey of resources, taxonomy and frontiers. 2024, arXiv preprint arXiv: 2404.04925

[239]

Winata G, Aji AF, Yong ZX, Solorio T. The decades progress on code-switching research in NLP: a systematic survey on trends and challenges. In: Proceedings of Findings of the Association for Computational Linguistics: ACL 2023. 2023, 2936–2978, doi: 10.18653/v1/2023.findings-acl.185

[240]

Li Z, Shi Y, Liu Z, Yang F, Payani A, Liu N, Du M. Language ranker: a metric for quantifying LLM performance across high and low-resource languages. In: Proceedings of the 27th AAAI Conference on Artificial Intelligence. 2025, 28186–28194

[241]

Wang P, Tao R, Chen Q, Hu M, Qin L. X-WebAgentBench: a multilingual interactive web benchmark for evaluating global agentic system. In: Proceedings of Findings of the Association for Computational Linguistics: ACL 2025. 2025, 19320–19335

[242]

Zhang Y, Liu X, Zhou R, Chen Q, Fei H, Lu W, Qin L. CCHaLL: a novel benchmark for joint cross-lingual and cross-modal hallucinations detection in large language models. In: Proceedings of the 63rd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). 2025, 30728–30749

[243]

Xue L, Constant N, Roberts A, Kale M, Al-Rfou R, Siddhant A, Barua A, Raffel C. mt5: a massively multilingual pre-trained text-to-text transformer. In: Proceedings of 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. 2021, 483–498

[244]

Le Scao T, Fan A, Akiki C, Pavlick E, Ilić S, , . BLOOM: a 176B-parameter open-access multilingual language model. 2022, arXiv preprint arXiv: 2211.05100

[245]

Chen P, Ji S, Bogoychev N, Kutuzov A, Haddow B, Heafield K. Monolingual or multilingual instruction tuning: which makes a better alpaca. In: Proceedings of Findings of the Association for Computational Linguistics: EACL 2024. 2024, 1347–1356

[246]

Cahyawijaya S, Lovenia H, Yu T, Chung W, Fung P. Instruct-align: teaching novel languages to LLMs through alignment-based cross-lingual instruction. 2023, arXiv preprint arXiv: 2305.13627

[247]

Li J, Zhou H, Huang S, Cheng S, Chen J . Eliciting the translation ability of large language models via multilingual finetuning with translation instructions. Transactions of the Association for Computational Linguistics, 2024, 12: 576–592

[248]

Bajpai A, Chakraborty T. Multilingual LLMs inherently reward in-language time-sensitive semantic alignment for low-resource languages. In: Proceedings of the 39th AAAI Conference on Artificial Intelligence. 2025, 23469–23477

[249]

Winata GI, Madotto A, Lin Z, Liu R, Yosinski J, Fung P. Language models are few-shot multilingual learners. In: Proceedings of the 1st Workshop on Multilingual Representation Learning. 2021, 1–15

[250]

Shi F, Suzgun M, Freitag M, Wang X, Srivats S, Vosoughi S, Chung HW, Tay Y, Ruder S, Zhou D, Das D, Wei J. Language models are multilingual chain-of-thought reasoners. In: Proceedings of the 11th International Conference on Learning Representations. 2023

[251]

Lin XV, Mihaylov T, Artetxe M, Wang T, Chen S, Simig D, Ott M, Goyal N, Bhosale S, Du J, Pasunuru R, Shleifer S, Koura PS, Chaudhary V, O’Horo B, Wang J, Zettlemoyer L, Kozareva Z, Diab M, Stoyanov V, Li X. Few-shot learning with multilingual generative language models. In: Proceedings of 2022 Conference on Empirical Methods in Natural Language Processing. 2022, 9019–9052

[252]

Tanwar E, Dutta S, Borthakur M, Chakraborty T. Multilingual LLMs are better cross-lingual in-context learners with alignment. In: Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). 2023, 6292–6307

[253]

Qin L, Chen Q, Wei F, Huang S, Che W. Cross-lingual prompting: improving zero-shot chain-of-thought reasoning across languages. In: Proceedings of 2023 Conference on Empirical Methods in Natural Language Processing. 2023, 2695–2709

[254]

Huang H, Tang T, Zhang D, Zhao X, Song T, Xia Y, Wei F. Not all languages are created equal in LLMs: improving multilingual capability by cross-lingual-thought prompting. In: Proceedings of Findings of the Association for Computational Linguistics: EMNLP 2023. 2023, 12365–12394

[255]

Huang Z, Xu X, Ni J, Zhu H, Wang C . Multimodal representation learning for recommendation in internet of things. IEEE Internet of Things Journal, 2019, 6( 6): 10675–10685

[256]

Wang Y, Wu S, Zhang Y, Yan S, Liu Z, Luo J, Fei H. Multimodal chain-of-thought reasoning: a comprehensive survey. 2025, arXiv preprint arXiv: 2503.12605

[257]

Li X, Qiao J, Yin S, Wu L, Gao C, Wang Z, Li X . A survey of multimodal fake news detection: a cross-modal interaction perspective. IEEE Transactions on Emerging Topics in Computational Intelligence, 2025, 9( 4): 2658–2675

[258]

Peng Y, Wang X, Wei Y, Pei J, Qiu W, Jian A, Hao Y, Pan J, Xie T, Ge L, Zhuang R, Song X, Liu Y, Zhou Y. Skywork R1V: pioneering multimodal reasoning with chain-of-thought. 2025, arXiv preprint arXiv: 2504.05599

[259]

Lu P, Mishra S, Xia T, Qiu L, Chang KW, Zhu SC, Tafjord O, Clark P, Kalyan A. Learn to explain: multimodal reasoning via thought chains for science question answering. In: Proceedings of the 36th International Conference on Neural Information Processing Systems. 2022, 182

[260]

Qin L, Huang S, Chen Q, Cai C, Zhang Y, Liang B, Che W, Xu R. MMSD2.0: towards a reliable multi-modal sarcasm detection system. In: Proceedings of Findings of the Association for Computational Linguistics: ACL 2023. 2023, 10834–10845

[261]

Qin L, Wang W, Chen Q, Che W. CLIPText: a new paradigm for zero-shot text classification. In: Proceedings of Findings of the Association for Computational Linguistics: ACL 2023. 2023, 1077–1088

[262]

Yang Z, Li L, Lin K, Wang J, Lin CC, Liu Z, Wang L. The dawn of LMMs: preliminary explorations with GPT-4V(ision). 2023, arXiv preprint arXiv: 2309.17421

[263]

Fei H, Wu S, Ji W, Zhang H, Zhang M, Lee ML, Hsu W. Video-of-thought: step-by-step video reasoning from perception to cognition. In: Proceedings of the 41st International Conference on Machine Learning. 2024, 13109–13125

[264]

Qin L, Chen Q, Fei H, Chen Z, Li M, Che W. What factors affect multi-modal in-context learning? an in-depth exploration. In: Proceedings of the 38th International Conference on Neural Information Processing Systems. 2024

[265]

Zhang Y, Liu X, Tao R, Chen Q, Fei H, Che W, Qin L. ViTCoT: video-text interleaved chain-of-thought for boosting video understanding in large language models. 2025, arXiv preprint arXiv: 2507.09876

[266]

Wang W, Lv Q, Yu W, Hong W, Qi J, Wang Y, Ji J, Yang Z, Zhao L, Song X, Xu J, Xu B, Li J, Dong Y, Ding M, Tang J. CogVLM: visual expert for pretrained language models. In: Proceedings of the 38th International Conference on Neural Information Processing Systems. 2024

[267]

Liu H, Li C, Li Y, Lee YJ. Improved baselines with visual instruction tuning. In: Proceedings of 2024 IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2024, 26286–26296

[268]

Chen Q, Qin L, Zhang J, Chen Z, Xu X, Che W. M3CoT: a novel benchmark for multi-domain multi-step multi-modal chain-of-thought. In: Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). 2024, 8199–8221

[269]

Yang Z, Li L, Wang J, Lin K, Azarnasab E, Ahmed F, Liu Z, Liu C, Zeng M, Wang L. MM-REACT: prompting ChatGPT for multimodal reasoning and action. 2023, arXiv preprint arXiv: 2303.11381

[270]

Lu P, Bansal H, Xia T, Liu J, Li C, Hajishirzi H, Cheng H, Chang KW, Galley M, Gao J. MathVista: evaluating mathematical reasoning of foundation models in visual contexts. In: Proceedings of the 12th International Conference on Learning Representations. 2024

[271]

Zhang Z, Zhang A, Li M, Zhao H, Karypis G, Smola A. Multimodal chain-of-thought reasoning in language models. Transactions on Machine Learning Research, 2024, 2024

[272]

Cheng Z, Chen Q, Zhang J, Fei H, Feng X, Che W, Li M, Qin L. CoMT: a novel benchmark for chain of multi-modal thought on large vision-language models. In: Proceedings of the 39th AAAI Conference on Artificial Intelligence. 2025, 23678–23686

[273]

Cheng Z, Chen Q, Xu X, Wang J, Wang W, Fei H, Wang Y, Wang AJ, Chen Z, Che W, Qin L. Visual thoughts: a unified perspective of understanding multimodal chain-of-thought. 2025, arXiv preprint arXiv: 2505.15510

[274]

Wu Y, Zhang P, Xiong W, Oguz B, Gee JC, Nie Y. The role of chain-of-thought in complex vision-language reasoning task. 2023, arXiv preprint arXiv: 2311.09193

[275]

Mitra C, Huang B, Darrell T, Herzig R. Compositional chain-of-thought prompting for large multimodal models. In: Proceedings of 2024 IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2024, 14420–14431

[276]

Wang P, Zhang Y, Fei H, Chen Q, Wang Y, Si J, Lu W, Li M, Qin L. S3 agent: unlocking the power of VLLM for zero-shot multi-modal sarcasm detection. ACM Transactions on Multimedia Computing, Communications and Applications, 2024

[277]

Qin Y, Liang S, Ye Y, Zhu K, Yan L, Lu Y, Lin Y, Cong X, Tang X, Qian B, Zhao S, Hong L, Tian R, Xie R, Zhou J, Gerstein M, Li D, Liu Z, Sun M. ToolLLM: facilitating large language models to master 16000+ real-world APIs. In: Proceedings of the 12th International Conference on Learning Representations. 2024

[278]

Hu M, Mu Y, Yu XC, Ding M, Wu S, Shao W, Chen Q, Wang B, Qiao Y, Luo P. Tree-planner: efficient close-loop task planning with large language models. In: Proceedings of the 12th International Conference on Learning Representations. 2024

[279]

Shinn N, Cassano F, Gopinath A, Narasimhan KR, Yao S. Reflexion: language agents with verbal reinforcement learning. In: Proceedings of the 37th International Conference on Neural Information Processing Systems. 2023, 377

[280]

Wang L, Ma C, Feng X, Zhang Z, Yang H, Zhang J, Chen Z, Tang J, Chen X, Lin Y, Zhao WX, Wei Z, Wen J . A survey on large language model based autonomous agents. Frontiers of Computer Science, 2024, 18( 6): 186345

[281]

Zhu X, Chen Y, Tian H, Tao C, Su W, Yang C, Huang G, Li B, Lu L, Wang X, Qiao Y, Zhang Z, Dai J. Ghost in the Minecraft: generally capable agents for open-world environments via large language models with text-based knowledge and memory. 2023, arXiv preprint arXiv: 2305.17144

[282]

Hu M, Chen T, Chen Q, Mu Y, Shao W, Luo P. HiAgent: hierarchical working memory management for solving long-horizon agent tasks with large language model. In: Proceedings of the 63rd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). 2025, 32779–32798

[283]

Zhang G, Niu L, Fang J, Wang K, Bai L, Wang X. Multi-agent architecture search via agentic supernet. 2025, arXiv preprint arXiv: 2502.04180

[284]

Yue Y, Zhang G, Liu B, Wan G, Wang K, Cheng D, Qi Y. MasRouter: learning to route LLMs for multi-agent systems. In: Proceedings of the 63rd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). 2025, 15549–15572

[285]

Yao S, Yu D, Zhao J, Shafran I, Griffiths TL, Cao Y, Narasimhan K. Tree of thoughts: deliberate problem solving with large language models. In: Proceedings of the 37th International Conference on Neural Information Processing Systems. 2023, 517

[286]

Chen W, Ma X, Wang X, Cohen WW. Program of thoughts prompting: disentangling computation from reasoning for numerical reasoning tasks. Transactions on Machine Learning Research, 2023, 2023

[287]

Lei B, Lin PH, Liao C, Ding C. Boosting logical reasoning in large language models through a new framework: the graph of thought. 2023, arXiv preprint arXiv: 2308.08614

[288]

Zhang Y, Chen Q, Zhou J, Wang P, Si J, Wang J, Lu W, Qin L. Wrong-of-thought: an integrated reasoning framework with multi-perspective verification and wrong information. In: Proceedings of Findings of the Association for Computational Linguistics: EMNLP 2024. 2024, 6644–6653

[289]

Muhlgay D, Ram O, Magar I, Levine Y, Ratner N, Belinkov Y, Abend O, Leyton-Brown K, Shashua A, Shoham Y. Generating benchmarks for factuality evaluation of language models. In: Proceedings of the 18th Conference of the European Chapter of the Association for Computational Linguistics (Volume 1: Long Papers). 2024, 49–66

[290]

Min S, Krishna K, Lyu X, Lewis M, Yih WT, Koh PW, Iyyer M, Zettlemoyer L, Hajishirzi H. FActScore: fine-grained atomic evaluation of factual precision in long form text generation. In: Proceedings of 2023 Conference on Empirical Methods in Natural Language Processing. 2023, 12076–12100

[291]

Adlakha V, BehnamGhader P, Lu XH, Meade N, Reddy S . Evaluating correctness and faithfulness of instruction-following models for question answering. Transactions of the Association for Computational Linguistics, 2024, 12: 681–699

[292]

Liu T, Zhang Y, Brockett C, Mao Y, Sui Z, Chen W, Dolan WB. A token-level reference-free hallucination detection benchmark for free-form text generation. In: Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). 2022, 6723–6737

[293]

Chang KK, Cramer M, Soni S, Bamman D. Speak, memory: an archaeology of books known to ChatGPT/GPT-4. In: Proceedings of 2023 Conference on Empirical Methods in Natural Language Processing. 2023, 7312–7327

[294]

Hartvigsen T, Gabriel S, Palangi H, Sap M, Ray D, Kamar E. ToxiGen: a large-scale machine-generated dataset for adversarial and implicit hate speech detection. In: Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). 2022, 3309–3326

[295]

Wan Y, Wang W, He P, Gu J, Bai H, Lyu MR. BiasAsker: measuring the bias in conversational AI system. In: Proceedings of the 31st ACM Joint European Software Engineering Conference and Symposium on the Foundations of Software Engineering. 2023, 515–527

[296]

Dhamala J, Sun T, Kumar V, Krishna S, Pruksachatkun Y, Chang KW, Gupta R. BOLD: dataset and metrics for measuring biases in open-ended language generation. In: Proceedings of 2021 ACM Conference on Fairness, Accountability, and Transparency. 2021, 862–872

[297]

Ganguli D, Lovitt L, Kernion J, Askell A, Bai Y, , . Red teaming language models to reduce harms: methods, scaling behaviors, and lessons learned. 2022, arXiv preprint arXiv: 2209.07858

[298]

Sun H, Zhang Z, Deng J, Cheng J, Huang M. Safety assessment of Chinese large language models. 2023, arXiv preprint arXiv: 2304.10436

[299]

Pan W, Liu Z, Chen Q, Zhou X, Yu H, Jia X. The hidden dimensions of LLM alignment: a multi-dimensional safety analysis. 2025, arXiv preprint arXiv: 2502.09674

[300]

Yu M, Meng F, Zhou X, Wang S, Mao J, Pan L, Chen T, Wang K, Li X, Zhang Y, An B, Wen Q. A survey on trustworthy LLM agents: threats and countermeasures. In: Proceedings of the 31st ACM SIGKDD Conference on Knowledge Discovery and Data Mining V.2. 2025, 6216–6226

[301]

Xu Y, Hu L, Zhao J, Qiu Z, Xu K, Ye Y, Gu H . A survey on multilingual large language models: corpora, alignment, and bias. Frontiers of Computer Science, 2025, 19( 11): 1911362

[302]

Li ZZ, Zhang D, Zhang ML, Zhang J, Liu Z, , . From system 1 to system 2: a survey of reasoning large language models. 2025, arXiv preprint arXiv: 2502.17419

[303]

Jaech A, Kalai A, Lerer A, Richardson A, El-Kishky A, , . OpenAI o1 system card. 2024, arXiv preprint arXiv: 2412.16720

[304]

Li LH, Hessel J, Yu Y, Ren X, Chang KW, Choi Y. Symbolic chain-of-thought distillation: small models can also “think” step-by-step. In: Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). 2023, 2665–2679

[305]

Wang P, Wang Z, Li Z, Gao Y, Yin B, Ren X. SCOTT: self-consistent chain-of-thought distillation. In: Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). 2023, 5546–5558

[306]

Chen Q, Qin L, Liu J, Peng D, Wang J, Hu M, Chen Z, Che W, Liu T. ECM: a unified electronic circuit model for explaining the emergence of in-context learning and chain-of-thought in large language model. 2025, arXiv preprint arXiv: 2502.03325

[307]

Lyu Q, Havaldar S, Stein A, Zhang L, Rao D, Wong E, Apidianaki M, Callison-Burch C. Faithful chain-of-thought reasoning. In: Proceedings of the 13th International Joint Conference on Natural Language Processing and the 3rd Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics. 2023, 305–329

[308]

Zeng S, Chang X, Xie M, Liu X, Bai Y, Pan Z, Xu M, Wei X. FutureSightDrive: thinking visually with spatio-temporal CoT for autonomous driving. 2025, arXiv preprint arXiv: 2505.17685

[309]

Renze M, Guven E. Self-reflection in LLM agents: effects on problem-solving performance. 2024, arXiv preprint arXiv: 2405.06682

[310]

Balachandran V, Chen J, Chen L, Garg S, Joshi N, Lara Y, Langford J, Nushi B, Vineet V, Wu Y, Yousefi S. Inference-time scaling for complex tasks: where we stand and what lies ahead. 2025, arXiv preprint arXiv: 2504.00294

[311]

Wu Y, Sun Z, Li S, Welleck S, Yang Y. Inference scaling laws: an empirical analysis of compute-optimal inference for problem-solving with language models. 2024, arXiv preprint arXiv: 2408.00724

[312]

Yu Q, Zhang Z, Zhu R, Yuan Y, Zuo X, , . DAPO: an open-source LLM reinforcement learning system at scale. 2025, arXiv preprint arXiv: 2503.14476

[313]

Yue Y, Yuan Y, Yu Q, Zuo X, Zhu R, , . VAPO: efficient and reliable reinforcement learning for advanced reasoning tasks. 2025, arXiv preprint arXiv: 2504.05118

[314]

Chen J, Fan T, Liu X, Liu L, Lin Z, , . Seed-thinking-v1.5: advancing superb reasoning models with reinforcement learning. 2025, arXiv preprint arXiv: 2504.13914

[315]

Duan K, Liu Z, Mao X, Pang T, Chen C, Chen Q, Shieh MQ, Dou L. Efficient process reward model training via active learning. 2025, arXiv preprint arXiv: 2504.10559

[316]

Sui Y, Chuang YN, Wang G, Zhang J, Zhang T, Yuan J, Liu H, Wen A, Zhong S, Zou N, Chen H, Hu X. Stop overthinking: a survey on efficient reasoning for large language models. Transactions on Machine Learning Research, 2025, 2025

[317]

Feng S, Fang G, Ma X, Wang X. Efficient reasoning models: a survey. 2025, arXiv preprint arXiv: 2504.10903

[318]

Hou B, Zhang Y, Ji J, Liu Y, Qian K, Andreas J, Chang S. ThinkPrune: pruning long chain-of-thought of LLMs via reinforcement learning. 2025, arXiv preprint arXiv: 2504.01296

[319]

Chen Q, Qin L, Liu J, Liao Y, Wang J, Zhou J, Che W. RBF++: quantifying and optimizing reasoning boundaries across measurable and unmeasurable capabilities for chain-of-thought reasoning. 2025, arXiv preprint arXiv: 2505.13307

[320]

Qi J, Xu Z, Shen Y, Liu M, Jin D, Wang Q, Huang L. The art of SOCRATIC QUESTIONING: recursive thinking with large language models. In: Proceedings of 2023 Conference on Empirical Methods in Natural Language Processing. 2023, 4177–4199

[321]

Paul D, Ismayilzada M, Peyrard M, Borges B, Bosselut A, West R, Faltings B. REFINER: reasoning feedback on intermediate representations. In: Proceedings of the 18th Conference of the European Chapter of the Association for Computational Linguistics (Volume 1: Long Papers). 2024, 1100–1126

[322]

Madaan A, Tandon N, Gupta P, Hallinan S, Gao L, Wiegreffe S, Alon U, Dziri N, Prabhumoye S, Yang Y, Gupta S, Majumder BP, Hermann K, Welleck S, Yazdanbakhsh A, Clark P. SELF-REFINE: iterative refinement with self-feedback. In: Proceedings of the 37th International Conference on Neural Information Processing Systems. 2023, 2019

[323]

Li Y, Shen X, Yao X, Ding X, Miao Y, Krishnan R, Padman R. Beyond single-turn: a survey on multi-turn interactions with large language models. 2025, arXiv preprint arXiv: 2504.04717

[324]

Yao S, Zhao J, Yu D, Du N, Shafran I, Narasimhan KR, Cao Y. ReAct: synergizing reasoning and acting in language models. In: Proceedings of the 11th International Conference on Learning Representations. 2023

[325]

Chen Z, Chen Q, Qin L, Guo Q, Lv H, Zou Y, Yan H, Chen K, Lin D. What are the essential factors in crafting effective long context multi-hop instruction datasets? Insights and best practices. In: Proceedings of the 63rd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). 2025, 27129–27151

RIGHTS & PERMISSIONS

© The Author(s) 2025. This article is published with open access at link.springer.com and journal.hep.com.cn
