Proceedings of the 19th SIGBioMed Workshop on Biomedical Language Processing 2020
DOI: 10.18653/v1/2020.bionlp-1.7
A BERT-based One-Pass Multi-Task Model for Clinical Temporal Relation Extraction

Abstract: Recently BERT has achieved state-of-the-art performance in temporal relation extraction from clinical Electronic Medical Records text. However, the current approach is inefficient as it requires multiple passes through each input sequence. We extend a recently-proposed one-pass model for relation classification to a one-pass model for relation extraction. We augment this framework by introducing global embeddings to help with long-distance relation inference, and by multi-task learning to increase model performance.
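The efficiency gain the abstract describes comes from encoding each sequence once and then classifying all candidate entity pairs from the shared token representations, rather than re-encoding the sequence once per pair. The toy sketch below illustrates that one-pass idea only; the encoder stand-in, pair representation, and all names and shapes are illustrative assumptions, not the authors' implementation.

```python
# Hedged sketch of one-pass relation extraction: a single encoder pass,
# then every candidate entity pair is scored from the shared token vectors.
# All components here are toy stand-ins, not the paper's actual model.
import random

random.seed(0)

DIM = 8        # toy hidden size (a real BERT-base would use 768)
N_LABELS = 3   # e.g. BEFORE / AFTER / OVERLAP (illustrative label set)

def encode_once(tokens):
    """Stand-in for a single BERT forward pass: one vector per token."""
    return [[random.random() for _ in range(DIM)] for _ in tokens]

def pair_representation(h, i, j):
    """Concatenate the two entity token vectors (a common simple choice)."""
    return h[i] + h[j]

def score(rep, weights):
    """Toy linear classifier over the pair representation."""
    return [sum(w * x for w, x in zip(row, rep)) for row in weights]

tokens = ["The", "scan", "was", "done", "before", "admission", "."]
entity_positions = [1, 3, 5]   # token indices of events / time expressions
weights = [[random.random() for _ in range(2 * DIM)] for _ in range(N_LABELS)]

h = encode_once(tokens)        # ONE encoder pass for the whole sequence
predictions = {}
for a in entity_positions:
    for b in entity_positions:
        if a < b:              # score every candidate pair from shared h
            logits = score(pair_representation(h, a, b), weights)
            predictions[(a, b)] = logits.index(max(logits))

print(len(predictions))        # all pairs classified from one encoding
```

With three entities the loop scores three pairs from a single encoding; a pass-per-pair approach would instead run the encoder three times on the same sequence.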

Cited by 35 publications (48 citation statements)
References 23 publications
“…The pre-released version of BioBERT (January 2019) has already been shown to be very effective in many biomedical text mining tasks such as NER for clinical notes ( Alsentzer et al , 2019 ), human phenotype-gene RE ( Sousa et al , 2019 ) and clinical temporal RE ( Lin et al , 2019 ). The following updated versions of BioBERT will be available to the bioNLP community: (i) BioBERT and BioBERT trained on only PubMed abstracts without initialization from the existing BERT model and (ii) BioBERT and BioBERT trained on domain-specific vocabulary based on WordPiece.…”
Section: Discussion
confidence: 99%
“…Previous research on extracting these relations (e.g. Bethard et al., 2017; Ning et al., 2017; Lin et al., 2019) almost always uses pair-wise TimeML-annotated data, which has rich annotation but also inherits the above three complexity and consistency issues. To address these issues, Zhang and Xue (2018b) present a tree structure of relations between time expressions and events (TDT), along with a BiLSTM model (Zhang and Xue, 2018a) for parsing text into TDT and a crowd-sourced corpus (Zhang and Xue, 2019).…”
Section: Related Work
confidence: 99%
“…Model Choice: recent literature has shown some attempts to use neural models to classify temporal relations in text (Lin et al., 2019). We propose to use the BERT transformer model to solve our anchor date relation problem.…”
Section: Relation Classification Approach
confidence: 99%
“…Input Definition: While previous models (Lin et al., 2019) are not constrained within a sentence, BERT was not designed to solve problems of long-distance relations within a text. There is a limitation on the size of the input text sequences it can accept (512 tokens).…”
Section: Relation Classification Approach
confidence: 99%