Take a Step Back: Evoking Reasoning via Abstraction in Large Language Models
arXiv:2310.06117 · 190 citations · Ranked #169 of 2297 papers in ICLR 2024
Abstract
We present STEP-BACK PROMPTING, a simple prompting technique that enables LLMs to perform abstraction, deriving high-level concepts and first principles from instances containing specific details. Using those concepts and principles to guide reasoning, LLMs significantly improve their ability to follow a correct reasoning path toward the solution. We conduct experiments with STEP-BACK PROMPTING on PaLM-2L, GPT-4, and Llama2-70B models and observe substantial performance gains on a range of challenging reasoning-intensive tasks, including STEM, Knowledge QA, and Multi-Hop Reasoning. For instance, STEP-BACK PROMPTING improves PaLM-2L performance on MMLU Physics and Chemistry by 7% and 11% respectively, on TimeQA by 27%, and on MuSiQue by 7%.
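The abstract describes a two-step pipeline: first elicit the general principle behind a question (abstraction), then answer the original question grounded in that principle. Below is a minimal sketch of that flow. The `call_llm` function and the prompt wording are illustrative assumptions, not the paper's exact templates or any specific provider's API.

```python
def call_llm(prompt: str) -> str:
    """Hypothetical stand-in for a chat-completion call
    (e.g., to PaLM-2L, GPT-4, or Llama2-70B)."""
    raise NotImplementedError("Wire this to your model provider.")


def step_back_prompting(question: str) -> str:
    # Step 1 (Abstraction): ask a more generic "step-back" question to
    # surface the high-level concept or first principle behind the task.
    abstraction_prompt = (
        f"Here is a question: {question}\n"
        "Instead of answering directly, first state the general concept "
        "or first principle needed to solve it."
    )
    principles = call_llm(abstraction_prompt)

    # Step 2 (Reasoning): answer the original question, conditioned on
    # the retrieved principles, so the model follows a grounded
    # reasoning path rather than jumping straight to details.
    reasoning_prompt = (
        f"Principles: {principles}\n"
        f"Question: {question}\n"
        "Using the principles above, reason step by step to the answer."
    )
    return call_llm(reasoning_prompt)


if __name__ == "__main__":
    q = ("What happens to the pressure of an ideal gas if the temperature "
         "is doubled and the volume is increased by a factor of 8?")
    print(step_back_prompting(q))
```

The design point is that the two calls are decoupled: the abstraction step can be reused across many detail-level questions that share the same underlying principle.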
Citation History: 0 citations as of Jan 28, 2026; 190 citations as of Feb 13, 2026.