2021
DOI: 10.48550/arxiv.2112.11944
Preprint

Continual learning of longitudinal health records

Abstract: Continual learning denotes machine learning methods which can adapt to new environments while retaining and reusing knowledge gained from past experiences. Such methods address two issues encountered by models in non-stationary environments: ungeneralisability to new data, and the catastrophic forgetting of previous knowledge when retrained. This is a pervasive problem in clinical settings where patient data exhibits covariate shift not only between populations, but also continuously over time. However, while …

Cited by 3 publications (2 citation statements)
References 21 publications (31 reference statements)
“…However, instead of incremental learning, these studies considered domain adaptation for handling distribution shifts. In [31], the authors benchmarked state-of-the-art domain-incremental learning algorithms for longitudinal electronic health records (multivariate clinical time-series data) and found that replay-based domain-incremental algorithms outperform their counterparts. Along similar lines, Kiyasseh et al. [32] had earlier explored replay-based domain-incremental learning on physiological signals.…”
Section: A. Incremental Learning
confidence: 99%
“…Separate GAN models were trained on each of the silos and later combined into a central GAN model [62]. We expect future work to investigate the feasibility of, and introduce new ways to implement, GAN models on datasets from different institutions, and to explore new applications such as federated and continual learning [178].…”
Section: Generation Of EHRs From Multimodal Data And Multi-centers
confidence: 99%