Intriguing chain-of-thought (CoT) techniques have greatly exploited the emergent abilities of LLMs by eliciting them to decompose multi-step reasoning. Recent work in this field can be broadly classified into four categories: (i) improving performance on general-purpose reasoning tasks (Wei et al., 2022; Kojima et al., 2022a; Wang et al., 2022b; Zhou et al., 2022; Fu et al., 2022), i.e., arithmetic, symbolic, logical, and commonsense reasoning; (ii) applying CoT to domain-specific reasoning, such as multi-modality, or to purely linguistic tasks, such as translation (He et al., 2023), summarization, sentiment analysis (Fei et al., 2023), question answering, etc.; (iii) analyzing the mechanics and interpretability of CoT (Wang et al., 2022a; Lyu et al., 2023); (iv) distilling CoT techniques into smaller models (Ho et al., 2022; Kim et al., 2023).
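The elicitation idea behind zero-shot CoT (Kojima et al., 2022a) can be sketched as a simple prompt transformation: a fixed reasoning trigger is appended to the question so the model decomposes its answer into steps. The helper name and exact trigger phrasing below are illustrative assumptions, not a specific system's API:

```python
# Minimal sketch of zero-shot CoT prompting (after Kojima et al., 2022a).
# COT_TRIGGER and build_zero_shot_cot_prompt are illustrative names.

COT_TRIGGER = "Let's think step by step."

def build_zero_shot_cot_prompt(question: str) -> str:
    """Append a reasoning trigger so the LLM emits step-by-step reasoning."""
    return f"Q: {question}\nA: {COT_TRIGGER}"

prompt = build_zero_shot_cot_prompt(
    "If a train travels 60 km in 1.5 hours, what is its average speed?"
)
print(prompt)
```

The resulting string would be sent to the LLM as-is; the model's continuation then contains the intermediate reasoning steps rather than only a final answer.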