Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)
DOI: 10.18653/v1/2020.emnlp-main.111
Towards Medical Machine Reading Comprehension with Structural Knowledge and Plain Text

Abstract: Machine reading comprehension (MRC) has achieved significant progress in the open domain in recent years, mainly due to large-scale pre-trained language models. However, it performs much worse in specific domains such as the medical field, owing to the lack of extensive training data and the neglect of professional structural knowledge. As a first effort, we collect a large-scale medical multi-choice question dataset (more than 21k instances) for the National Licensed Pharmacist Examination in China. It is a challengin…

Cited by 31 publications (20 citation statements)
References 40 publications (32 reference statements)
“…ALBERT has been applied to some tasks, such as natural language inference [32], sentiment analysis [33], causality analysis [34], and medical machine reading [35]. The self-attention structure is the core part of the transformer mechanism.…”
Section: Methodsmentioning
confidence: 99%
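The statement above notes that self-attention is the core of the Transformer mechanism. As a rough illustration only (not code from the cited papers), here is a minimal NumPy sketch of single-head scaled dot-product self-attention; all shapes and weight matrices are hypothetical.

```python
import numpy as np

def self_attention(x, w_q, w_k, w_v):
    """x: (seq_len, d_model); w_q/w_k/w_v: (d_model, d_k) projections."""
    q, k, v = x @ w_q, x @ w_k, x @ w_v
    # Scaled dot-product scores between every pair of token positions.
    scores = q @ k.T / np.sqrt(k.shape[-1])          # (seq_len, seq_len)
    # Row-wise softmax turns scores into attention weights.
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ v                                # (seq_len, d_k)

rng = np.random.default_rng(0)
x = rng.standard_normal((4, 8))                       # 4 tokens, d_model = 8
w_q, w_k, w_v = (rng.standard_normal((8, 8)) for _ in range(3))
out = self_attention(x, w_q, w_k, w_v)
print(out.shape)  # (4, 8)
```

In the full Transformer this operation is repeated across multiple heads and stacked layers; ALBERT reuses the same layer parameters across the stack, which is how it cuts the parameter count.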
“…However, it contains a relatively small variety of medical questions, automatically generated from clinical case reports. Recently, Li et al. (2020) proposed a multi-choice Chinese medical QA dataset that retrieves text snippets as the passage; the task only requires choosing an existing correct option from a candidate set. Our work focuses specifically on fine-grained medical MRC tasks and deep domain-knowledge reasoning, with a manually constructed, high-quality dataset released.…”
Section: Related Workmentioning
confidence: 99%
“…We evaluate CMedBERT on CMedMRC, and compare it against six strong baselines: DrQA (Chen et al, 2017), BERT base, ERNIE, KT-NET, MC-BERT (Zhang et al, 2020) and KMQA (Li et al, 2020). KT-NET is the first model to leverage rich knowledge to enhance pre-trained language models for MRC.…”
Section: Experimental Setupsmentioning
confidence: 99%
“…ALBERT has been applied to some tasks, such as natural language inference [32], sentiment analysis [33], causality analysis [34], and medical machine reading [35]. The self-attention structure is the core part of the transformer mechanism.…”
Section: Self-ensemble Albert Modelmentioning
confidence: 99%