Reasoning Like Program Executors
Preprint, 2022
DOI: 10.48550/arxiv.2201.11473

Cited by 3 publications (5 citation statements). References: 0 publications.

“…The need to prepare students for an increasingly complex world has led to the development of several new training methodologies (e.g., Pi et al., 2022; Wang et al., 2022; Mentzer et al., 2023). Pi and colleagues (2022) provide POET, a pretraining paradigm for basic reasoning with variants they call simply POET-Math and POET-Logic, as well as the more complex POET-SQL.…”
Section: The Reasoning for Complexity Competency (mentioning)
confidence: 99%
“…It consists of various sub-skills including commonsense reasoning (Zellers et al., 2018; Talmor et al., 2019; Bhagavatula et al., 2019), numerical reasoning (Dua et al., 2019), arithmetic reasoning (Koncel-Kedziorski et al., 2015; Roy and Roth, 2016; Miao et al., 2020; Cobbe et al., 2021), logical reasoning (Yu et al., 2020), tabular reasoning (Zhu et al., 2021), and so on. Previous efforts in machine learning exploited symbolic systems (Mihaylov and Frank, 2018; Ding et al., 2019; Wang et al., 2022a,b) and pre-training strategies (Deng et al., 2021; Asai and Hajishirzi, 2020; Pi et al., 2022). Recently, large language models with chain-of-thought prompting (Wei et al., 2022b; Wang et al., 2022c; Zhou et al., 2022; Zhang et al., 2022b) demonstrate promising reasoning abilities with appropriately designed prompts, achieving competitive performance on several benchmarks.…”
Section: Reasoning Ability (mentioning)
confidence: 99%
“…We also compare with GenBERT and POET-BART (Pi et al., 2022) numbers as reported in the respective papers. GenBERT is a BERT-large model specialized for the DROP dataset, trained with the synthetic data proposed by Geva et al. (2020).…”
Section: TeaBReaC Improves Model Performance (mentioning)
confidence: 99%
“…Yoran et al. (2022) created a synthetic dataset using 13 handcrafted multi-hop QA reasoning patterns applied to Wikipedia tables. Lastly, Pi et al. (2022) showed that pretraining language models on synthetic data derived from the inputs and outputs of program executors (arithmetic, logic-based, and SQL-based) can also improve downstream QA performance. In contrast to these works, we use actual questions from a wide range of real datasets to teach a broad range of multi-hop reasoning skills.…”
Section: Related Work (mentioning)
confidence: 99%
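To make the recipe described in that last statement concrete, below is a minimal sketch of how executor-derived pretraining pairs for the arithmetic (POET-Math-style) setting could be generated, with the Python interpreter standing in as the program executor. The function names, expression templates, and value ranges are illustrative assumptions, not the authors' actual data-generation code.

```python
import random

# Hedged sketch of POET-style pretraining data: sample a small arithmetic
# "program", execute it, and emit the (program, result) pair as text.
# Names and templates here are hypothetical, not from Pi et al. (2022).

def sample_expression(max_ops: int = 3) -> str:
    """Sample a small arithmetic expression, e.g. '3.5 + 12.0 - 7.2'."""
    terms = [str(round(random.uniform(0, 100), 1))]
    for _ in range(random.randint(1, max_ops)):
        terms.append(random.choice(["+", "-"]))
        terms.append(str(round(random.uniform(0, 100), 1)))
    return " ".join(terms)

def make_pretraining_pair(expr: str) -> dict:
    """Execute the expression and package (input, target) as a text pair."""
    result = round(eval(expr), 1)  # the Python interpreter is the "executor"
    return {"input": f"compute: {expr}", "target": str(result)}

if __name__ == "__main__":
    random.seed(0)
    for _ in range(3):
        print(make_pretraining_pair(sample_expression()))
```

Per the citation statement, the actual POET corpus extends the same input/output idea to logic-based and SQL executors (POET-Logic, POET-SQL), not just arithmetic.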