Select-and-Answer Prompting: Facilitating LLMs for Improving Zero-Shot Reasoning
Yufang WANG, Xuesong TANG, Kuangrong HAO
Journal of Donghua University (English Edition), 2025, Vol. 42, Issue 5: 513-522.
Large language models (LLMs) have demonstrated remarkable generalization abilities across multiple tasks in natural language processing (NLP). For multi-step reasoning tasks, chain-of-thought (CoT) prompting facilitates step-by-step thinking and thereby improves performance. However, despite significant advancements in LLMs, current CoT prompting performs suboptimally on smaller-scale models with fewer parameters. Additionally, the common paradigm of few-shot CoT prompting relies on a set of manual demonstrations, so its performance depends on the quality of these annotations and varies with task-specific requirements. To address these limitations, we propose a select-and-answer prompting (SAP) method that enhances language model performance on reasoning tasks without the need for manual demonstrations. The method comprises two primary steps: first, the prompt guides the model to conduct a preliminary analysis and generate several candidate answers; second, the model derives a final answer from these candidates. The proposed prompting strategy is evaluated on two language models of different sizes and six datasets. On ChatGLM-6B, SAP consistently outperforms few-shot CoT across all datasets. On GPT-3.5, SAP achieves performance comparable to few-shot CoT and outperforms zero-shot CoT in most cases. These experimental results indicate that SAP can significantly improve the accuracy of language models on reasoning tasks.
zero-shot learning / large language model (LLM) / reasoning problem / chain-of-thought (CoT) prompting
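The two-step procedure described in the abstract lends itself to a minimal sketch. The snippet below assumes a hypothetical `generate` function that wraps any chat-completion backend (e.g., ChatGLM-6B or GPT-3.5); the prompt templates are illustrative paraphrases of the select and answer steps, not the paper's exact wording.

```python
# Minimal sketch of select-and-answer prompting (SAP).
# `generate` is a hypothetical wrapper around an LLM inference API;
# the prompt templates are illustrative, not the paper's exact templates.

def generate(prompt: str) -> str:
    """Placeholder: call the target LLM (e.g., ChatGLM-6B or GPT-3.5)."""
    raise NotImplementedError

def select_and_answer(question: str, n_candidates: int = 3) -> str:
    # Step 1 (select): prompt the model to analyze the question and
    # propose several candidate answers.
    select_prompt = (
        f"Question: {question}\n"
        f"Analyze the question step by step and list {n_candidates} "
        "plausible candidate answers."
    )
    candidates = generate(select_prompt)

    # Step 2 (answer): feed the candidates back and ask the model to
    # derive its final answer from them.
    answer_prompt = (
        f"Question: {question}\n"
        f"Candidate answers:\n{candidates}\n"
        "Choose the single best answer from the candidates above and "
        "state it concisely."
    )
    return generate(answer_prompt)
```

In practice, `generate` would be replaced by a call to the target model's inference API; the select step and the answer step are issued as two separate calls so that the candidate list is explicitly fed back to the model, without any manually annotated demonstrations.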
National Natural Science Foundation of China (62176052)