Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP 2020)
DOI: 10.18653/v1/2020.emnlp-main.549

Question Directed Graph Attention Network for Numerical Reasoning over Text

Abstract: Numerical reasoning over texts, such as addition, subtraction, sorting, and counting, is a challenging machine reading comprehension task, since it requires both natural language understanding and arithmetic computation. To address this challenge, we propose a heterogeneous graph representation for the context of the passage and question needed for such reasoning, and design a question directed graph attention network to drive multi-step numerical reasoning over this context graph. Our model, which combines dee…
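The abstract describes, at a high level, attention over a heterogeneous context graph that is steered by the question. As a rough illustration only (not the authors' released code), the sketch below shows one plausible message-passing step in which a pooled question vector biases edge attention scores; the class name, layer shapes, and scoring function are all assumptions made for this example.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class QuestionDirectedAttention(nn.Module):
    """Illustrative sketch: one message-passing step in which a pooled
    question vector biases the attention weights over graph edges.
    This is NOT the paper's implementation, only the general idea."""

    def __init__(self, dim):
        super().__init__()
        self.w_node = nn.Linear(dim, dim, bias=False)
        self.w_question = nn.Linear(dim, dim, bias=False)
        self.score = nn.Linear(3 * dim, 1)

    def forward(self, h, q, adj):
        # h:   (n, d)  node embeddings (numbers/entities from passage and question)
        # q:   (d,)    pooled question representation that "directs" attention
        # adj: (n, n)  0/1 adjacency of the heterogeneous context graph
        n, d = h.shape
        qb = self.w_question(q).expand(n, n, -1)           # broadcast question
        hi = self.w_node(h).unsqueeze(1).expand(n, n, d)   # source nodes
        hj = self.w_node(h).unsqueeze(0).expand(n, n, d)   # neighbor nodes
        e = self.score(torch.cat([hi, hj, qb], dim=-1)).squeeze(-1)
        e = e.masked_fill(adj == 0, float("-inf"))         # attend only over edges
        alpha = F.softmax(e, dim=-1)                       # question-biased weights
        alpha = torch.nan_to_num(alpha)                    # isolated nodes -> 0
        return F.relu(alpha @ self.w_node(h))              # aggregated messages
```

Multi-step reasoning, as described in the abstract, would correspond to stacking several such layers so information propagates across the graph more than one hop.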

Cited by 39 publications (52 citation statements)
References 17 publications

“…HybridQA (Chen et al., 2020b) is one existing hybrid dataset for QA tasks, where the context is a table connected to Wiki pages via hyperlinks. Numerical reasoning is key to many NLP tasks such as question answering (Dua et al., 2019; Ran et al., 2019; Andor et al., 2019; Chen et al., 2020a; Pasupat and Liang, 2015; Herzig et al., 2020; Yin et al., 2020) and arithmetic word problems (Kushman et al., 2014; Mitra and Baral, 2016; Huang et al., 2017; Ling et al., 2017). To the best of our knowledge, no prior work attempts to develop models able to perform numerical reasoning over hybrid contexts.…”
Section: Results and Analysis (mentioning)
Confidence: 99%
“…GenBERT (Geva et al., 2020) pre-trains BERT (Devlin et al., 2019) with synthetic number and text data. QDGAT (Chen et al., 2020) designs a graph neural network in which number nodes of the same entity type are fully connected. While there are many other related works on this topic (Hu et al., 2019; Andor et al., 2019; Gupta et al., 2019; Min et al., 2019; Sundararaman et al., 2020; Saha et al., 2021), none of them address the problem of extrapolation in DROP.…”
Section: Related Work (mentioning)
Confidence: 99%
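For context on the graph construction this excerpt attributes to QDGAT, a minimal sketch of "fully connecting number nodes of the same entity type" might look like the following. This illustrates the cited description only; the function name and input format are invented for the example.

```python
from itertools import combinations
from collections import defaultdict

def connect_numbers_by_entity_type(numbers):
    """Illustrative sketch: number nodes that share an entity type
    (e.g. DATE, MONEY, PERCENT) are fully connected. `numbers` is a
    list of (node_id, entity_type) pairs; returns an undirected edge
    list. Names and format are hypothetical, not QDGAT's code."""
    by_type = defaultdict(list)
    for node_id, entity_type in numbers:
        by_type[entity_type].append(node_id)
    edges = []
    for nodes in by_type.values():
        # fully connect all number nodes of the same entity type
        edges.extend(combinations(nodes, 2))
    return edges

# Example: two MONEY numbers and two DATE numbers from a passage
print(connect_numbers_by_entity_type(
    [(0, "MONEY"), (1, "DATE"), (2, "MONEY"), (3, "DATE")]
))  # -> [(0, 2), (1, 3)]
```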
“…Models: To inspect the extrapolation capability of the existing models on DROP, we evaluate the following representative models from the leaderboard: NAQANet (the official baseline model of DROP), NumNet (Ran et al., 2019), NumNet+ (RoBERTa), and GenBERT (Geva et al., 2020). Although we mention QDGAT (Chen et al., 2020) in this paper, we did not evaluate it because its official implementation could not be reproduced.…”
Section: Empirical Investigation on Extrapolation (mentioning)
Confidence: 99%
“…NumNet (Ran et al., 2019) leveraged a Graph Neural Network (GNN) to design a number-aware deep learning model. Also leveraging GNNs, Chen et al. (2020a) distinguished number types more precisely by adding connections to entities and obtained better performance. Chen et al. (2020b) searched possible programs exhaustively based on answer numbers and employed these programs as weak supervision to train the whole model.…”
Citation type: mentioning
Confidence: 99%
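To make the "number-aware" GNN idea in this last excerpt concrete: one common construction, in the spirit of NumNet, adds directed edges that encode order relations between extracted number values, so the graph carries relative-magnitude information. The sketch below is a simplified illustration under that assumption, not the paper's exact graph.

```python
def build_comparison_edges(values):
    """Sketch of a number-aware graph: add a directed edge i -> j
    whenever values[i] > values[j], so a GNN can propagate
    relative-magnitude information between number nodes.
    Simplified illustration, not NumNet's exact construction."""
    edges = []
    for i, vi in enumerate(values):
        for j, vj in enumerate(values):
            if i != j and vi > vj:
                edges.append((i, j))  # "greater-than" relation
    return edges

# Numbers extracted from a passage, e.g. [30, 12, 47]
print(build_comparison_edges([30, 12, 47]))
# -> [(0, 1), (2, 0), (2, 1)]
```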