2021
DOI: 10.1609/aaai.v35i6.16629
Learning by Fixing: Solving Math Word Problems with Weak Supervision

Abstract: Previous neural solvers of math word problems (MWPs) are learned with full supervision and fail to generate diverse solutions. In this paper, we address this issue by introducing a weakly-supervised paradigm for learning MWPs. Our method only requires the annotations of the final answers and can generate various solutions for a single problem. To boost weakly-supervised learning, we propose a novel learning-by-fixing (LBF) framework, which corrects the misperceptions of the neural network via symbolic reasoning…
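The fixing loop sketched in the abstract can be made concrete with a small, hedged example: when the solver's predicted expression tree executes to the wrong answer, search for a minimal edit that makes it execute to the annotated answer, then train on the fixed tree as a pseudo-label. The tuple-based tree encoding and the `single_node_fixes` / `fix` routines below are illustrative assumptions; the paper's actual mechanism propagates errors top-down through the tree rather than enumerating edits.

```python
# Minimal sketch of the learning-by-fixing idea: if the predicted
# expression tree executes to the wrong answer, find a single-node
# change (operator swap or quantity swap) that yields the annotated
# answer. Illustrative only -- not the paper's implementation.

OPS = {
    "+": lambda a, b: a + b,
    "-": lambda a, b: a - b,
    "*": lambda a, b: a * b,
    "/": lambda a, b: a / b if b else float("inf"),
}

def execute(tree):
    """Evaluate a tree given as ('op', left, right); numbers are leaves."""
    if not isinstance(tree, tuple):
        return tree
    op, left, right = tree
    return OPS[op](execute(left), execute(right))

def single_node_fixes(tree, quantities):
    """Yield every tree obtained by editing exactly one node."""
    if not isinstance(tree, tuple):
        for q in quantities:            # swap a leaf for another quantity
            if q != tree:
                yield q
        return
    op, left, right = tree
    for new_op in OPS:                  # swap the operator at this node
        if new_op != op:
            yield (new_op, left, right)
    for new_left in single_node_fixes(left, quantities):
        yield (op, new_left, right)
    for new_right in single_node_fixes(right, quantities):
        yield (op, left, new_right)

def fix(tree, answer, quantities, eps=1e-6):
    """Return a tree executing to `answer`, changing at most one node."""
    if abs(execute(tree) - answer) < eps:
        return tree                     # already correct: nothing to fix
    for candidate in single_node_fixes(tree, quantities):
        if abs(execute(candidate) - answer) < eps:
            return candidate
    return None                         # no single-node fix exists

# Toy call: the solver predicted 3 + 5, but the annotated answer is 15.
print(fix(("+", 3, 5), 15, quantities=[3, 5]))   # -> ('*', 3, 5)
```

In the toy call, the search recovers ('*', 3, 5), which under the weak-supervision scheme would serve as the corrected solution tree for the next training step.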

Cited by 33 publications (26 citation statements)
References 29 publications
“…Past works have conducted offline evaluation of GPT-3 [18], the predecessor to the LLM ChatGPT is based on, in computer science education to automatically generate code and error explanations [19][20][21]. GPT-3 has also been applied to math word problems and evaluated on its ability to generate variations of a word problem [22]. Below, we present a literature review of work using other methods to automatically generate hints, provide additional background on LLMs, and contextualize the use of ChatGPT in education.…”
Section: Related Work
Mentioning confidence: 99%
“…M_op and M_con are two trainable embeddings for operators and constants, respectively. For a numeric value in V_num, its token embedding takes the corresponding hidden state h_{loc(y_t, T, P)}, where loc(y_t, T, P) is the index position of y_t in table T or paragraph P (Hong et al., 2021).…”
Section: Tree-based Decoder Module
Mentioning confidence: 99%
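The lookup described in that statement can be sketched as follows. This is a hedged PyTorch sketch under an assumed vocabulary layout (operators first, then constants, then numeric tokens), not the cited model's code: operators and constants come from trainable tables (the M_op and M_con of the quote), while a numeric token copies the encoder hidden state at its source position.

```python
import torch
import torch.nn as nn

class TokenEmbedder(nn.Module):
    """Sketch of the embedding scheme quoted above: trainable tables for
    operators (M_op) and constants (M_con); numeric values instead copy
    the encoder hidden state at their source position loc(y_t, T, P).
    The vocabulary layout and all sizes are illustrative assumptions."""

    def __init__(self, n_ops: int, n_consts: int, hidden_size: int):
        super().__init__()
        self.M_op = nn.Embedding(n_ops, hidden_size)      # operator embeddings
        self.M_con = nn.Embedding(n_consts, hidden_size)  # constant embeddings
        self.n_ops = n_ops
        self.n_consts = n_consts

    def forward(self, token_id: int, encoder_states: torch.Tensor,
                loc: dict) -> torch.Tensor:
        """encoder_states: (seq_len, hidden); loc maps a numeric token id
        to its index position in the input table or paragraph."""
        if token_id < self.n_ops:                         # operator token
            return self.M_op(torch.tensor(token_id))
        if token_id < self.n_ops + self.n_consts:         # constant token
            return self.M_con(torch.tensor(token_id - self.n_ops))
        # numeric value in V_num: copy hidden state h_{loc(y_t, T, P)}
        return encoder_states[loc[token_id]]
```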
“…Researchers recently focus on solving math word problems using neural networks (Ling et al. 2017; Wang, Liu, and Shi 2017a; Huang et al. 2018a; Robaidek, Koncel-Kedziorski, and Hajishirzi 2018; Wang et al. 2018, 2019; Chiang and Chen 2019; Xie and Sun 2019; Zhang et al. 2020; Hong et al. 2021). The mere translation from a text to an equation neglects the intermediate process required by problem solving, thus lacking interpretability.…”
Section: Related Work
Mentioning confidence: 99%
“…from the community of artificial intelligence and natural language processing often mix them with other types of problems, such as number problems and geometry problems, into one whole task called Math Word Problems (MWPs) (Wang, Liu, and Shi 2017a; Huang et al. 2016; Amini et al. 2019). Recent works on Math Word Problems (Wang, Liu, and Shi 2017a; Huang et al. 2018a; Wang et al. 2018; Xie and Sun 2019; Hong et al. 2021) focused on using end-to-end neural networks (e.g., Seq2Seq, Seq2Tree) to directly translate a problem text into an expression, which is then executed to get the final answer. Although they seem to obtain satisfying performance, such end-to-end neural models suffer from the following drawbacks:…”
Section: Introduction
Mentioning confidence: 99%