Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics 2019
DOI: 10.18653/v1/p19-1619

Modeling Intra-Relation in Math Word Problems with Different Functional Multi-Head Attentions

Abstract: Several deep learning models have been proposed for solving math word problems (MWPs) automatically. Although these models can capture features without manual effort, their approaches to capturing features are not specifically designed for MWPs. To exploit the merits of deep learning models while also accounting for MWPs' specific features, we propose a group attention mechanism to extract global features, quantity-related features, quantity-pair features and question-related features …
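The group attention described in the abstract can be pictured as standard multi-head attention whose heads are split into groups, each restricted by a different mask. Below is a minimal NumPy sketch of that idea; the function names, the masking scheme, and the one-head-per-group simplification are illustrative assumptions, not the authors' released implementation.

```python
# Minimal sketch of group attention: multi-head attention where each head
# group uses a different mask to focus on global, quantity-related,
# quantity-pair, or question-related token pairs. All names and the masking
# scheme are assumptions for illustration, not the paper's exact code.
import numpy as np

def scaled_dot_attention(q, k, v, mask):
    # q, k, v: (seq_len, d); mask: (seq_len, seq_len), 1 = attend, 0 = block.
    scores = q @ k.T / np.sqrt(q.shape[-1])
    scores = np.where(mask.astype(bool), scores, -1e9)  # block disallowed pairs
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ v

def group_attention(x, quantity_idx, question_idx):
    """x: (seq_len, d) token encodings; indices mark quantity/question tokens."""
    n = x.shape[0]
    is_qty = np.zeros(n, dtype=bool)
    is_qty[quantity_idx] = True            # quantity tokens
    is_ques = np.zeros(n, dtype=bool)
    is_ques[question_idx] = True           # question tokens

    global_mask = np.ones((n, n))                     # every token pair
    quantity_mask = np.outer(is_qty, np.ones(n))      # quantities attend to all
    pair_mask = np.outer(is_qty, is_qty)              # quantity-to-quantity pairs
    question_mask = np.outer(np.ones(n), is_ques)     # all attend to the question

    # One head per group for brevity; the paper uses multiple heads per group.
    # np.maximum(..., eye) keeps self-attention so no row is fully masked.
    heads = [scaled_dot_attention(x, x, x, np.maximum(m, np.eye(n)))
             for m in (global_mask, quantity_mask, pair_mask, question_mask)]
    return np.concatenate(heads, axis=-1)  # concatenated group features

# Toy usage: 6 tokens, tokens 1 and 3 are quantities, token 5 is the question.
x = np.random.randn(6, 8)
features = group_attention(x, quantity_idx=[1, 3], question_idx=[5])
print(features.shape)  # (6, 32)
```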

Cited by 61 publications (47 citation statements)
References 23 publications (25 reference statements)
“…Inspired by the great success of Seq2Seq models in Neural Machine Translation, deep-learning based methods have been intensively explored by researchers for equation generation (Wang et al., 2017; Ling et al., 2017; Li et al., 2018, 2019; Zou and Lu, 2019; Xie and Sun, 2019). However, different forms of equations can solve the same math problem, which often causes models to fail.…”
Section: Math Word Problems
confidence: 99%
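The failure mode quoted above can be made concrete: an exact-match training objective treats two equivalent equations as different targets. The example below is ours, not from the cited works.

```python
# Toy illustration: two surface forms of the same equation solve one MWP, but
# a Seq2Seq decoder trained against a single gold string gets no credit for
# predicting the alternative form.
gold = "x = 3 + 5 * 2"
pred = "x = 2 * 5 + 3"

def rhs_value(equation):
    # Evaluate the right-hand side of "x = <expr>" (toy evaluator).
    return eval(equation.split("=", 1)[1])

print(rhs_value(gold) == rhs_value(pred))  # True:  both solutions are correct
print(gold == pred)                        # False: exact match calls pred wrong
```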
“…We report the solution accuracy of each baseline on the test set. On MAWPS, our baselines are: i) Retrieval, Classification, and Seq2Seq (Robaidek et al., 2018); ii) Seq2Tree (Dong and Lapata, 2016); iii) Graph2Seq (Xu et al., 2018a); iv) MathDQN; v) T-RNN; vi) Group-Att (Li et al., 2019). On MathQA, our baselines are: i) Sequence-to-program (Amini et al., 2019); ii) TP-N2F (Chen et al., 2019a); iii) Seq2Seq, Seq2Tree and Graph2Seq.…”
Section: Experiments for Math Word Problems
confidence: 99%
“…To date, the literature on MWP generation is limited. Most prior work focuses on automatically answering MWPs, e.g., (Li et al., 2019, 2020; Qin et al., 2020; Shi et al., 2015; Roy and Roth, 2015; Wu et al., 2020a), rather than generating them (Nandhini and Balasundaram, 2011; Williams, 2011; Polozov et al., 2015; Deane and Sheehan, 2003). Existing MWP generation methods also often produce MWPs that are either of unsatisfactory language quality or fail to preserve the information about math equations and contexts that must be embedded in them.…”
Section: Introduction
confidence: 99%
“…The research community has focused mainly on solving two types of mathematical word problems: arithmetic word problems (Hosseini et al., 2014; Mitra & Baral, 2016; Wang et al., 2017; Li et al., 2019; Chiang & Chen, 2019) and algebraic word problems (Kushman et al., 2014; Shi et al., 2015; Ling et al., 2017; Amini et al., 2019). Arithmetic word problems can be solved using basic mathematical operations (+, −, ×, ÷) and involve a single unknown variable.…”
Section: Introduction
confidence: 99%
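As a concrete illustration of the distinction drawn in the last statement (the problems and numbers below are ours, not from the cited papers):

```python
# Arithmetic MWP: one unknown, solvable directly with +, -, *, /.
#   "Amy has 4 pencils and buys 3 packs of 2 more. How many pencils now?"
arithmetic_answer = 4 + 3 * 2
assert arithmetic_answer == 10

# Algebraic MWP: requires forming and solving an equation, e.g. 2x + 3 = 11.
#   "Twice a number plus 3 is 11. What is the number?"
algebraic_answer = (11 - 3) / 2  # solve 2x + 3 = 11 for x
assert algebraic_answer == 4
```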