Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)
DOI: 10.18653/v1/2020.emnlp-main.189
Context-Aware Answer Extraction in Question Answering

Abstract: Extractive QA models have shown very promising performance in predicting the correct answer to a question for a given passage. However, they sometimes result in predicting the correct answer text but in a context irrelevant to the given question. This discrepancy becomes especially important as the number of occurrences of the answer text in a passage increases. To resolve this issue, we propose BLANC (BLock AttentioN for Context prediction) based on two main ideas: context prediction as an auxiliary task in m…
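The abstract describes training context prediction as an auxiliary task alongside span extraction. A minimal sketch of how such a multi-task objective could be combined is shown below; the function names, the `lam` weighting factor, and the single-index context target are illustrative assumptions, not the paper's exact formulation.

```python
import math

def softmax(logits):
    # Numerically stable softmax over a list of raw scores.
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    s = sum(exps)
    return [e / s for e in exps]

def cross_entropy(logits, target_idx):
    # Negative log-likelihood of the target position.
    return -math.log(softmax(logits)[target_idx])

def multitask_qa_loss(start_logits, end_logits, context_logits,
                      start_idx, end_idx, context_idx, lam=0.8):
    """Span-extraction loss plus a weighted auxiliary context-prediction
    loss. `context_logits`/`context_idx` and `lam` are hypothetical
    stand-ins for whatever auxiliary target and weight the model uses."""
    span_loss = (cross_entropy(start_logits, start_idx)
                 + cross_entropy(end_logits, end_idx))
    context_loss = cross_entropy(context_logits, context_idx)
    return span_loss + lam * context_loss
```

The auxiliary term only changes training: at inference time the model still extracts a span, but the shared encoder has been pushed to distinguish question-relevant context from irrelevant occurrences of the answer text.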

Cited by 19 publications (14 citation statements)
References 16 publications (16 reference statements)
“…The ebisu_uit team presents a novel method for training Vietnamese reading comprehension. To tackle the machine reading comprehension task in Vietnamese, they apply BLANC (BLock AttentioN for Context prediction) [32] on top of pretrained language models. With this strategy, the model produced good results.…”
Section: The ebisu_uit Team
confidence: 99%
“…However, most previous research did not focus on giving answers in the context of the question. BLANC [4] was proposed to solve this problem, and we applied it to our reading module.…”
Section: Related Work
confidence: 99%
“…According to the published BLANC article [4], using BLANC to help increase the readability of the model resulted in an accuracy increase of 2-3% over the baseline models on English datasets. The baseline models extract the answer with the highest probability from the given passage.…”
Section: Introduction
confidence: 99%
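The baseline behavior the citation refers to — extracting the highest-probability span — is commonly implemented by maximizing the sum of start and end logits over valid spans. A minimal sketch, assuming a simple length cap (`max_len` is an illustrative parameter, not a value from the paper):

```python
def extract_best_span(start_logits, end_logits, max_len=15):
    """Return (start, end) token indices maximizing
    start_logits[i] + end_logits[j] over valid spans with
    i <= j < i + max_len. This is the generic baseline scheme
    the excerpt describes, not BLANC itself."""
    best, best_score = (0, 0), float("-inf")
    for i, s in enumerate(start_logits):
        # Only consider end positions at or after the start, within the cap.
        for j in range(i, min(i + max_len, len(end_logits))):
            score = s + end_logits[j]
            if score > best_score:
                best_score, best = score, (i, j)
    return best
```

Because this scoring ignores the surrounding context, the top-scoring span can land on an occurrence of the answer text that is irrelevant to the question — the discrepancy the abstract highlights.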
“…Recently, Machine Reading Comprehension (MRC) has attracted wide attention and achieved remarkable success when solving specific tasks in stationary environments, such as answering factual questions with Wikipedia articles or answering narrative questions with web search logs (Seo et al., 2017; Seonwoo et al., 2020; Zhang et al., 2021; Wu and Xu, 2020). However, the answering scenario changes over time in real-world applications.…”
Section: Introduction
confidence: 99%