Proceedings of the Conference on Health, Inference, and Learning 2021
DOI: 10.1145/3450439.3451877

A comprehensive EHR timeseries pre-training benchmark

Cited by 21 publications (39 citation statements: 1 supporting, 34 mentioning, 0 contrasting).
References 24 publications.
“…Instead of a 2-step process, we use a single-step process in which all the tasks are jointly trained under a multitask framework. The improved label efficiency of SeqSNR corresponds with the few-shot learning experiments conducted by McDermott et al. [23], which found that MTL pretraining preserved performance at subsampling rates as low as 0.1% of training data.…”
Section: Discussion (supporting; confidence: 54%)
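This statement describes single-step joint multitask training over a shared encoder, and credits MTL pretraining with preserving performance even when only 0.1% of training labels are kept. A minimal PyTorch-style sketch of that joint-training setup (the SharedBottomMTL class, task names, shapes, and the training snippet are illustrative assumptions, not code from either paper):

```python
import torch
import torch.nn as nn

# Illustrative shared encoder with one head per task, trained in a single
# step on all task losses jointly (no separate pretrain/fine-tune phases).
class SharedBottomMTL(nn.Module):
    def __init__(self, n_features, hidden, task_dims):
        super().__init__()
        self.encoder = nn.GRU(n_features, hidden, batch_first=True)
        self.heads = nn.ModuleDict(
            {name: nn.Linear(hidden, dim) for name, dim in task_dims.items()}
        )

    def forward(self, x):
        _, h = self.encoder(x)   # x: (batch, time, n_features)
        h = h[-1]                # final hidden state of the last layer
        return {name: head(h) for name, head in self.heads.items()}

model = SharedBottomMTL(n_features=32, hidden=64,
                        task_dims={"mortality": 1, "long_los": 1})
opt = torch.optim.Adam(model.parameters())
loss_fn = nn.BCEWithLogitsLoss()

x = torch.randn(16, 48, 32)  # toy batch: 16 stays, 48 hourly steps
y = {"mortality": torch.randint(0, 2, (16, 1)).float(),
     "long_los": torch.randint(0, 2, (16, 1)).float()}

# Single-step joint update: sum per-task losses so every task's gradient
# flows into the shared encoder at once.
opt.zero_grad()
logits = model(x)
loss = sum(loss_fn(logits[t], y[t]) for t in y)
loss.backward()
opt.step()
```

The few-shot setting quoted above would then fit task heads on a tiny labeled subset (e.g. a random mask keeping roughly 0.1% of training examples) rather than the full training set.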
“…Besides the low-label scenario, there is also the issue of negative transfer across EHR prediction tasks, which was reported by McDermott et al. [23] for most common MIMIC-III endpoints. Our results corroborate these findings, demonstrating that SB MTL tends to show a performance drop relative to ST learning.…”
Section: Discussion (mentioning; confidence: 94%)
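Negative transfer here is an empirical observation: on a given endpoint, the shared-bottom (SB) multitask model scores worse than a single-task (ST) baseline trained on the same data. A small sketch of that comparison, assuming per-task probability scores from both models on a shared held-out set (roc_auc_score is the real scikit-learn function; the helper and the toy scores are illustrative):

```python
from sklearn.metrics import roc_auc_score

def transfer_gap(y_true, st_scores, sb_scores):
    """AUROC(shared-bottom MTL) minus AUROC(single-task baseline).

    A negative value on an endpoint is the 'negative transfer'
    described above: sharing the encoder across tasks hurt this task.
    """
    return roc_auc_score(y_true, sb_scores) - roc_auc_score(y_true, st_scores)

# Toy usage with made-up scores for one endpoint:
y_true = [0, 0, 1, 1, 1, 0]
st = [0.10, 0.30, 0.80, 0.70, 0.90, 0.20]  # single-task model
sb = [0.55, 0.30, 0.60, 0.50, 0.70, 0.45]  # shared-bottom MTL model
print(transfer_gap(y_true, st, sb))        # < 0 here, i.e. negative transfer
```

Computing this gap per endpoint makes the reported pattern concrete: a drop on most MIMIC-III tasks under SB MTL, rather than a single aggregate number.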