Findings of the Association for Computational Linguistics: EMNLP 2021
DOI: 10.18653/v1/2021.findings-emnlp.221

Learning Numeracy: A Simple Yet Effective Number Embedding Approach Using Knowledge Graph

Abstract: Numeracy plays a key role in natural language understanding. However, existing NLP approaches, whether traditional word2vec embeddings or contextualized transformer-based language models, fail to learn numeracy. As a result, the performance of these models is limited when they are applied to number-intensive applications in the clinical and financial domains. In this work, we propose a simple number embedding approach based on a knowledge graph. We construct a knowledge graph consisting of number entities and magnitu…
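The truncated abstract still conveys the core mechanism: represent numbers as entities in a knowledge graph whose edges encode magnitude relations, then learn embeddings from those triples. Below is a minimal, self-contained sketch of that idea using a TransE-style translation objective in NumPy. The toy number list, the single `greater_than` relation, and all hyperparameters are illustrative assumptions, not the paper's actual graph construction or training setup.

```python
# Minimal sketch (not the authors' released code): embed numbers by
# building a tiny knowledge graph of number entities linked by an
# assumed "greater_than" magnitude relation, then train TransE-style
# embeddings on its triples with a margin ranking loss.
import numpy as np

rng = np.random.default_rng(0)

# Toy knowledge graph: (head, relation, tail) triples over number entities.
numbers = [1, 2, 5, 10, 50, 100]
entities = {n: i for i, n in enumerate(numbers)}
relations = {"greater_than": 0}
triples = [(entities[b], relations["greater_than"], entities[a])
           for a, b in zip(numbers, numbers[1:])]

dim, margin, lr, epochs = 16, 1.0, 0.05, 500  # illustrative hyperparameters
E = rng.normal(scale=0.1, size=(len(entities), dim))   # entity embeddings
R = rng.normal(scale=0.1, size=(len(relations), dim))  # relation embeddings

def score(h, r, t):
    """TransE score: distance of h + r from t (lower = more plausible)."""
    return np.linalg.norm(E[h] + R[r] - E[t])

for _ in range(epochs):
    for h, r, t in triples:
        t_neg = rng.integers(len(entities))  # corrupt the tail at random
        if t_neg == t:
            continue
        # Margin ranking loss: positive triple should score at least
        # `margin` lower than the corrupted one; update only if violated.
        if score(h, r, t) + margin > score(h, r, t_neg):
            g_pos = E[h] + R[r] - E[t]
            g_pos /= np.linalg.norm(g_pos) + 1e-9
            g_neg = E[h] + R[r] - E[t_neg]
            g_neg /= np.linalg.norm(g_neg) + 1e-9
            E[h] -= lr * (g_pos - g_neg)
            R[r] -= lr * (g_pos - g_neg)
            E[t] += lr * g_pos
            E[t_neg] -= lr * g_neg

# Qualitative check: an observed triple (2 > 1) should score lower
# than an implausible one (1 > 100).
print(score(entities[2], 0, entities[1]),
      score(entities[1], 0, entities[100]))
```

In such a translation model, the shared relation vector pushes larger numbers to a consistent offset from smaller ones, so embedding distance comes to reflect relative magnitude, which is the numeracy signal word2vec-style training misses.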

Cited by 7 publications (11 citation statements)
References 13 publications (22 reference statements)

“…The proposed NR model is compared with previous SOTA phenotyping methods. In the unsupervised setting, we consider unsupervised baseline methods including NCBO [6], NCR [8], and the unsupervised model by Zhang et al. [12]. In the supervised setting, we consider ClinicalBERT [27] (which is fine-tuned separately for phenotyping) and the supervised model by Zhang et al. [12]. NCBO, NCR, and fine-tuned ClinicalBERT are selected because they show overall better performance than other baseline phenotyping methods in the corresponding settings, as demonstrated by Zhang et al. [12]. We decide not to use recent NR models [15-18] as baseline methods because none of them considers clinical knowledge and it is costly to adapt them to the clinical domain.…”
Section: Baselines and Evaluation Methods
confidence: 99%
“…We decide not to use recent NR models [15-18] as baseline methods because none of them considers clinical knowledge and it is costly to adapt them to the clinical domain.…”
Section: Methods
confidence: 99%
“…Please note that the work by [38] publishes one unsupervised and one supervised model, hence we compare the proposed NR model with both. We decide not to compare with recent numerical reasoning models (such as [34, 9, 31, 13]) as none of them incorporates clinical knowledge and we find it costly to adapt them to the clinical domain.…”
Section: Baselines and Evaluation Methods
confidence: 99%
“…Numerical Reasoning: Recent works publish new datasets for numerical reasoning [39] and utilise deep learning based models to develop numerical reasoning skills [34, 9, 31, 13] in domains other than the clinical domain. For example, [11] shows gains by using artificially created data on various tasks involving numeracy, such as math word problems and reading comprehension.…”
Section: Related Work
confidence: 99%