Proceedings of the 18th BioNLP Workshop and Shared Task 2019
DOI: 10.18653/v1/w19-5055
Saama Research at MEDIQA 2019: Pre-trained BioBERT with Attention Visualisation for Medical Natural Language Inference

Abstract: Natural language inference (NLI) is the task of identifying the relation between two sentences as entailment, contradiction, or neutrality. MedNLI is a biomedical flavour of NLI for the clinical domain. This paper explores the use of Bidirectional Encoder Representations from Transformers (BERT) for solving MedNLI. The proposed model, BERT pre-trained on PMC and PubMed and fine-tuned on MIMIC-III v1.4, achieves state-of-the-art results on MedNLI (83.45%) and an accuracy of 78.5% in the MEDIQA challenge. The authors present an analysi…
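The abstract describes a standard sentence-pair classification setup. As a rough sketch of that setup, not the authors' released code, the snippet below fine-tunes a publicly available BioBERT-style checkpoint for three-way NLI with Hugging Face Transformers. The checkpoint name, the toy example pair, and the hyperparameters are illustrative assumptions; the actual MedNLI data requires credentialed PhysioNet access and is not reproduced here.

```python
# Minimal sketch (assumptions, not the paper's code): fine-tuning a BioBERT-style
# checkpoint for three-way NLI (entailment / contradiction / neutral).
import torch
from torch.utils.data import DataLoader, Dataset
from transformers import AutoTokenizer, AutoModelForSequenceClassification

MODEL_NAME = "dmis-lab/biobert-base-cased-v1.1"  # assumed BioBERT checkpoint
LABELS = {"entailment": 0, "contradiction": 1, "neutral": 2}

class NLIPairs(Dataset):
    """Premise/hypothesis pairs with gold labels, encoded as a single sequence pair."""
    def __init__(self, pairs, tokenizer, max_len=128):
        self.pairs, self.tok, self.max_len = pairs, tokenizer, max_len
    def __len__(self):
        return len(self.pairs)
    def __getitem__(self, i):
        premise, hypothesis, label = self.pairs[i]
        enc = self.tok(premise, hypothesis, truncation=True,
                       padding="max_length", max_length=self.max_len,
                       return_tensors="pt")
        item = {k: v.squeeze(0) for k, v in enc.items()}
        item["labels"] = torch.tensor(LABELS[label])
        return item

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSequenceClassification.from_pretrained(MODEL_NAME, num_labels=3)

# Toy pair for illustration only; real training would iterate over the MedNLI split.
train_pairs = [("The patient denies chest pain.",
                "The patient has chest pain.",
                "contradiction")]
loader = DataLoader(NLIPairs(train_pairs, tokenizer), batch_size=16, shuffle=True)

optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)
model.train()
for batch in loader:
    optimizer.zero_grad()
    loss = model(**batch).loss  # cross-entropy over the three NLI labels
    loss.backward()
    optimizer.step()
```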

Cited by 5 publications (2 citation statements)
References 11 publications (9 reference statements)
“…The source of premise sentences in MedNLI is from MIMIC-III (Johnson et al, 2016), a large open-source clinical database. The dataset has been widely studied and benchmarked by the biomedical NLP research community (Peng et al, 2019; Phan et al, 2021a; El Boukkouri et al, 2020; Alrowili and Shanker, 2021; Kanakarajan et al, 2019).…”
Section: ViMedNLI (mentioning)
confidence: 99%
“…To incorporate unstructured knowledge into stance detection models, WS-BERT [4] directly infuses external knowledge from Wikipedia into the inputs of pre-trained models for stance detection on the VAST dataset. Another knowledge infusion paradigm is to fine-tune PLMs on a domain-specific corpus so that domain knowledge is embedded in the weights, as demonstrated by SciBERT [17], BioBERT [18], and BERTweet [19]. In addition to domain-specific fine-tuning, Self-talk [20] offers another interesting solution by eliciting knowledge from the model's own training corpus with hand-crafted prompts, enhancing language model learning with task-related knowledge.…”
Section: A. Knowledge Enhancement (mentioning)
confidence: 99%
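The citation statement above refers to the domain-adaptive fine-tuning paradigm in general terms. As a minimal sketch of that idea, under assumed hyperparameters and a hypothetical corpus file rather than any cited paper's setup, continued masked-language-model training of a general BERT checkpoint on a domain corpus looks roughly like this:

```python
# Minimal sketch (assumptions only): continue masked-LM pretraining of a general
# BERT checkpoint on a domain-specific corpus so domain knowledge ends up in the weights.
from datasets import load_dataset
from transformers import (AutoTokenizer, AutoModelForMaskedLM,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForMaskedLM.from_pretrained("bert-base-uncased")

# "domain_corpus.txt" is a hypothetical file with one raw domain sentence per line.
corpus = load_dataset("text", data_files={"train": "domain_corpus.txt"})["train"]
corpus = corpus.map(lambda ex: tokenizer(ex["text"], truncation=True, max_length=128),
                    batched=True, remove_columns=["text"])

collator = DataCollatorForLanguageModeling(tokenizer, mlm=True, mlm_probability=0.15)
args = TrainingArguments(output_dir="domain-bert",
                         per_device_train_batch_size=16,
                         num_train_epochs=1,
                         learning_rate=5e-5)
Trainer(model=model, args=args, train_dataset=corpus, data_collator=collator).train()
```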