Addressing catastrophic forgetting for medical domain expansion
Preprint, 2021
DOI: 10.21203/rs.3.rs-1087025/v1

Abstract: Model brittleness is a key concern when deploying deep learning models in real-world medical settings. A model that has high performance on one dataset may suffer a significant decline in performance when tested on a different dataset. While pooling datasets from multiple hospitals and re-training may provide a straightforward solution, it is often infeasible and may compromise patient privacy. An alternative approach is to fine-tune the model on subsequent datasets after training on the original dataset. No…
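To make the naive domain-expansion baseline in the abstract concrete, here is a minimal sketch (hypothetical names and data loaders, not the paper's code): train a classifier on the original hospital's dataset, fine-tune it on a second hospital's dataset, then re-evaluate on the first; the drop between the two accuracy measurements is the catastrophic forgetting the paper addresses.

```python
# Minimal sketch of sequential fine-tuning across two domains.
# `model`, `hospital_a_loader`, and `hospital_b_loader` are assumed to be
# defined elsewhere (any torch classifier and two domain-shifted DataLoaders).
import torch
import torch.nn as nn

def run_epochs(model, loader, epochs, lr=1e-4):
    """Train the model on one domain's loader for a fixed number of epochs."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    model.train()
    for _ in range(epochs):
        for x, y in loader:
            opt.zero_grad()
            loss_fn(model(x), y).backward()
            opt.step()

@torch.no_grad()
def accuracy(model, loader):
    """Top-1 accuracy on a loader."""
    model.eval()
    correct = total = 0
    for x, y in loader:
        correct += (model(x).argmax(dim=1) == y).sum().item()
        total += y.numel()
    return correct / total

# run_epochs(model, hospital_a_loader, epochs=10)  # train on original domain
# acc_before = accuracy(model, hospital_a_loader)
# run_epochs(model, hospital_b_loader, epochs=10)  # naive fine-tuning on new domain
# acc_after = accuracy(model, hospital_a_loader)   # typically drops: forgetting
```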

Cited by 3 publications (1 citation statement). References 36 publications.
“…All the selected architectures are summarized in Table I along with the type of architecture and number of parameters in the network. Furthermore, we also evaluate different normalization layers, as recent work has shown that they also significantly contribute to catastrophic forgetting [34], [35]. Specifically, we exchange the BatchNorm Layers of ResNet-50 with Continual- [36], Group- [37], Instance- [38], Layer and Batch Re-Normalization layers [39] and evaluate the resulting networks in the two incremental scenarios.…”
Section: A. Experimental Setup (mentioning)
confidence: 99%
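As an illustration of the normalization swap described in the quoted setup, the sketch below (an assumption-based example, not the cited paper's code) recursively replaces the BatchNorm2d layers of a torchvision ResNet-50 with GroupNorm; the other variants mentioned (Instance-, Layer-, Batch Re-Normalization, Continual Normalization) can be substituted the same way.

```python
# Minimal sketch: exchanging the BatchNorm layers of ResNet-50 with GroupNorm.
import torch
import torch.nn as nn
from torchvision.models import resnet50

def swap_batchnorm(module: nn.Module, num_groups: int = 32) -> None:
    """Recursively replace every nn.BatchNorm2d with nn.GroupNorm in place."""
    for name, child in module.named_children():
        if isinstance(child, nn.BatchNorm2d):
            # GroupNorm normalizes over channel groups and keeps no running
            # statistics, so its behavior does not drift between domains.
            # All BatchNorm layers in ResNet-50 have >= 64 channels, each
            # divisible by 32, so a fixed group count is safe here.
            gn = nn.GroupNorm(num_groups=num_groups,
                              num_channels=child.num_features)
            setattr(module, name, gn)
        else:
            swap_batchnorm(child, num_groups)

model = resnet50(weights=None)  # untrained backbone; weights arg needs torchvision >= 0.13
swap_batchnorm(model)

# Sanity check: the forward pass still works with the swapped layers.
x = torch.randn(2, 3, 224, 224)
print(model(x).shape)  # torch.Size([2, 1000])
```

The same recursive pattern works for the other normalization variants by constructing, e.g., `nn.InstanceNorm2d(child.num_features)` or `nn.LayerNorm` in place of the GroupNorm layer.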