2021
DOI: 10.1016/j.jbi.2020.103637

Language models are an effective representation learning technique for electronic health record data

Cited by 66 publications (65 citation statements)
References 19 publications
“…Among the 25 studies applying feature-representation transfer, support vector machines and 'vanilla' neural networks were the most common choices for the final model, which produces outputs from the learned feature representations. Seven studies applied and compared multiple methods for this purpose [18][19][20][21][22][23][24].…”
Section: Results (mentioning; confidence: 99%)
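The feature-representation transfer pattern described in that excerpt can be sketched as follows. A frozen encoder (here just a fixed random projection, standing in for a pretrained representation-learning model) produces features, and a linear SVM trained by hinge-loss subgradient descent acts as the final model. All names, shapes, and data below are invented for illustration and are not from any of the cited studies.

```python
import numpy as np

rng = np.random.default_rng(0)

# Frozen "pretrained" encoder: a fixed random projection standing in for
# features produced by a representation-learning model (illustrative only).
W_frozen = rng.normal(scale=0.5, size=(8, 6))   # 8 raw inputs -> 6 features

def encode(x_raw):
    # No training happens here: the representation is reused as-is.
    return np.tanh(x_raw @ W_frozen)

# Toy labelled cohort with labels in {-1, +1}.
x_raw = rng.normal(size=(300, 8))
y = np.where(x_raw[:, 0] + x_raw[:, 1] > 0, 1.0, -1.0)
feats = encode(x_raw)

# Final model: linear SVM fit with a hinge-loss subgradient (Pegasos-style).
w, b = np.zeros(6), 0.0
lr, lam = 0.5, 0.01
for _ in range(500):
    viol = (y * (feats @ w + b)) < 1            # margin violations
    w -= lr * (lam * w - (viol[:, None] * y[:, None] * feats).mean(axis=0))
    b -= lr * (-(viol * y).mean())

acc = float(np.mean(np.sign(feats @ w + b) == y))
print(f"SVM training accuracy on frozen features: {acc:.2f}")
```

Swapping the SVM for a small feed-forward network on the same frozen `feats` would reproduce the other common choice the survey mentions; the encoder stays untouched either way.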
“…Tabular data is probably the most widely used data type in clinical research. However, we identified only 15 studies applying transfer learning to tabular data, covering very different fields of medicine: two-thirds were from genetics [98][99][100][101][102], pathology [103][104][105], and intensive care [18,106], while the remaining five were from surgery [17], neonatology [107], infectious disease [108], pulmonology [109], and pharmacology [110]. Oncological applications such as cancer classification and cancer-survival prediction were common among the studies in genetics and pathology.…”
Section: Tabular Data (mentioning; confidence: 99%)
“…In 2018, Zhang et al used the Word2Vec approach in Patient2Vec [3] to learn a personalized representation of each patient for use in predicting future hospitalization. In more recent work, Steinberg et al [4] suggest using word embeddings to represent medical codes and train their model to predict the medical codes contained in a visit from the codes contained in previous visits.…”
Section: Natural Language Processing and Patient Representation Learning (mentioning; confidence: 99%)
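A minimal sketch of the code-embedding idea attributed to Steinberg et al: each medical code gets a learned embedding, a patient's history is summarized as the mean of the embeddings of codes from previous visits, and a sigmoid output layer scores which codes appear in the next visit. The vocabulary, the data, and the synthetic "successor" rule are all invented for illustration; the actual model in [4] is an autoregressive clinical language model, not this toy.

```python
import numpy as np

rng = np.random.default_rng(1)
V, d = 10, 8                                 # toy code vocabulary, embedding dim

E = rng.normal(scale=0.1, size=(V, d))       # code embeddings (learned)
U = rng.normal(scale=0.1, size=(d, V))       # projection to per-code logits
c = np.zeros(V)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Synthetic visits: code i in the history is followed by code (i+1) % V.
pairs = []
for _ in range(200):
    hist = rng.choice(V, size=3, replace=False)
    pairs.append((hist, (hist + 1) % V))

lr = 0.5
for _ in range(50):                          # epochs of plain SGD
    for hist, nxt in pairs:
        h = E[hist].mean(axis=0)             # aggregate history embedding
        p = sigmoid(h @ U + c)               # prob. each code appears next
        t = np.zeros(V); t[nxt] = 1.0
        g = p - t                            # BCE gradient w.r.t. the logits
        gh = U @ g                           # gradient w.r.t. the history vector
        U -= lr * np.outer(h, g)
        c -= lr * g
        E[hist] -= lr * gh / len(hist)

# Codes 2 and 5 in the history should now make codes 3 and 6 likely.
probs = sigmoid(E[[2, 5]].mean(axis=0) @ U + c)
print(np.round(probs, 2))
```

Mean-pooling the history is the simplest aggregation choice; sequence models (as in Patient2Vec's attention mechanism or CLMBR's language model) replace it with order-aware encoders.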
“…End-to-end models, like Patient2Vec [3], are trained on the specific task of predicting a defined medical outcome. Non-end-to-end models, such as CLMBR [4], first learn patient representations (in this case, by constructing a clinical language model) independently of any prediction task. The learned representations are then tested by using them as inputs to several clinical prediction models, such as logistic regression or gradient-boosted trees.…”
Section: Learning Patient Representations (mentioning; confidence: 99%)
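The two-stage, non-end-to-end workflow described above can be sketched like this: a single frozen patient-representation function (again a fixed random projection, standing in for a pretrained clinical language model) feeds several independent downstream logistic regressions, one per prediction task. Both toy outcomes and all shapes are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)

# Stage 1 (done once, task-independent): a frozen patient-representation
# function, a stand-in for a pretrained clinical language model.
R = rng.normal(scale=0.4, size=(12, 8))
def represent(raw):
    return np.tanh(raw @ R)                  # fixed: never fine-tuned

patients = rng.normal(size=(300, 12))        # toy per-patient raw features
reps = represent(patients)                   # shared by every downstream task

# Stage 2 (cheap, per task): a small logistic regression on the shared reps.
def fit_logreg(X, y, steps=400, lr=0.5):
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
        w -= lr * X.T @ (p - y) / len(y)
        b -= lr * (p - y).mean()
    return w, b

tasks = {                                    # toy stand-ins for clinical outcomes
    "outcome_a": (patients[:, 0] > 0).astype(float),
    "outcome_b": (patients[:, 1] + patients[:, 2] > 0).astype(float),
}
accs = {}
for name, y in tasks.items():
    w, b = fit_logreg(reps, y)
    accs[name] = float((((reps @ w + b) > 0) == y.astype(bool)).mean())
print(accs)
```

The point of the design is visible in the structure: the expensive representation step runs once, while each new clinical outcome only costs a small, fast downstream fit; a gradient-boosted tree could replace `fit_logreg` without touching stage 1.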