2019
DOI: 10.1093/jamia/ocz096

Enhancing clinical concept extraction with contextual embeddings

Abstract: Neural network-based representations ("embeddings") have dramatically advanced natural language processing (NLP) tasks, including clinical NLP tasks such as concept extraction. Recently, however, more advanced embedding methods and representations (e.g., ELMo, BERT) have further pushed the state-of-the-art in NLP, yet there are no common best practices for how to integrate these representations into clinical tasks. The purpose of this study, then, is to explore the space of possible options in utilizing these n…

Cited by 221 publications (169 citation statements)
References 35 publications (51 reference statements)

“…They release a pre-trained ELMo model along with their work, enabling further clinical NLP research to work with these powerful contextual embeddings. Si et al. (2019), released in late February 2019, train a BERT language model on a clinical note corpus and use complex task-specific models to yield improvements over both traditional embeddings and ELMo embeddings on the i2b2 2010 and 2012 tasks (Sun et al., 2013b,a) and the SemEval 2014 task 7 (Pradhan et al., 2014) and 2015 task 14 (Elhadad et al.) tasks, establishing new state-of-the-art results on all four corpora. However, this work neither releases their embeddings for the larger community nor examines the performance opportunities offered by fine-tuning BioBERT with clinical text or by training note-type specific embedding models, as we do.…”
Section: Contextual Clinical and Biomedical Embeddings
confidence: 87%
“…BERT has, in general, been found to be superior to ELMo and far superior to non-contextual embeddings on a variety of tasks, including those in the clinical domain (Si et al., 2019). For this reason, we only examine BERT here, rather than including ELMo or non-contextual embedding methods.…”
Section: Related Work
confidence: 99%
“…Chem-Prot (Peng et al., 2018); i2b2 (Rink et al., 2011); HoC (Du et al., 2019); MedNLI (Romanov and Shivade, 2018). P: PubMed, P+M: PubMed + MIMIC-III. For named entity recognition, we used a Bi-LSTM-CRF implementation as a sequence tagger (Si et al., 2019; Lample et al., 2016). Specifically, we concatenated the GloVe word embeddings (Pennington et al., 2014), character embeddings, and ELMo embeddings of each token and fed the combined vectors into the sequence tagger to predict the label for each token.…”
Section: Fine-tuning With ELMo
confidence: 99%
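The tagging architecture described in the statement above can be illustrated with a short sketch. This is not the cited authors' code: the embedding dimensions, the use of plain PyTorch, and the per-token linear output layer (standing in for the CRF used in the original work) are all assumptions made for brevity.

```python
# Minimal sketch (assumed dimensions, no CRF): a Bi-LSTM tagger over
# concatenated GloVe + character + ELMo vectors, one combined vector per token.
import torch
import torch.nn as nn

class BiLstmTagger(nn.Module):
    def __init__(self, glove_dim=100, char_dim=25, elmo_dim=1024,
                 hidden_dim=256, num_tags=9):
        super().__init__()
        input_dim = glove_dim + char_dim + elmo_dim  # concatenated per-token vector
        self.bilstm = nn.LSTM(input_dim, hidden_dim // 2,
                              batch_first=True, bidirectional=True)
        self.emit = nn.Linear(hidden_dim, num_tags)  # emission scores per tag

    def forward(self, glove, char, elmo):
        # Each input: (batch, seq_len, dim). Concatenate along the feature axis.
        x = torch.cat([glove, char, elmo], dim=-1)
        h, _ = self.bilstm(x)
        return self.emit(h)  # (batch, seq_len, num_tags); a CRF would decode these

# Random tensors stand in for the pretrained embeddings (batch=2, seq_len=10).
tagger = BiLstmTagger()
scores = tagger(torch.randn(2, 10, 100), torch.randn(2, 10, 25),
                torch.randn(2, 10, 1024))
print(scores.shape)  # torch.Size([2, 10, 9])
```

In practice the GloVe, character, and ELMo vectors would come from their respective pretrained models; the random tensors here only show the expected shapes.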
“…While ELMo has been shown to outperform GloVe and Word2Vec on consumer health question answering (Kearns and Thomas, 2018), BERT has outperformed ELMo on various clinical tasks (Si et al., 2019) and has been fine-tuned and applied to the biomedical literature and clinical notes (Alsentzer et al., 2019; Huang et al., 2019; Si et al., 2019; Lee et al., 2019). BERT supports the transfer of a pretrained general-purpose language model to a task-specific application through fine-tuning.…”
Section: Distributional Semantics
confidence: 99%
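To make the fine-tuning transfer described in the last statement concrete, here is a minimal sketch assuming the Hugging Face transformers package and the generic bert-base-uncased checkpoint; the cited works instead fine-tune clinically or biomedically pretrained BERT variants, and the three-label scheme and single-sentence example are illustrative assumptions.

```python
# Minimal sketch: transfer a pretrained BERT language model to a token-level
# clinical task by adding a classification head and running one training step.
import torch
from transformers import AutoTokenizer, AutoModelForTokenClassification

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForTokenClassification.from_pretrained(
    "bert-base-uncased", num_labels=3)  # e.g., O / B-problem / I-problem (assumed)

enc = tokenizer("Patient denies chest pain", return_tensors="pt")
labels = torch.zeros_like(enc["input_ids"])  # placeholder gold labels
out = model(**enc, labels=labels)            # loss + per-token logits
out.loss.backward()                          # one fine-tuning step
print(out.logits.shape)                      # (1, seq_len, num_labels)
```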