2021
DOI: 10.1016/j.neuroimage.2020.117689

Deep learning-based unlearning of dataset bias for MRI harmonisation and confound removal

Abstract (highlights): We demonstrate a flexible deep-learning-based harmonisation framework, applied to age prediction and segmentation tasks in a range of datasets. Scanner information is removed while maintaining performance and improving generalisability. The framework can be used with any feedforward network architecture; it successfully removes additional confounds and works with varied distributions.
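The highlights describe an adversarial-style "unlearning" scheme: a shared feature extractor feeds both a main-task head (e.g. age prediction or segmentation) and a scanner classifier, and the extractor is iteratively pushed to make the scanner classifier uninformative. Below is a minimal PyTorch sketch of that idea; the layer sizes, names, and the three-step update schedule are illustrative assumptions, not the authors' exact implementation.

```python
# Minimal PyTorch sketch of scanner "unlearning" via a confusion loss.
# Layer sizes, names, and the three-step schedule are illustrative assumptions,
# not the exact configuration used in the paper.
import torch
import torch.nn as nn
import torch.nn.functional as F

class FeatureExtractor(nn.Module):
    def __init__(self, in_dim=128, feat_dim=64):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(in_dim, feat_dim), nn.ReLU())

    def forward(self, x):
        return self.net(x)

class Head(nn.Module):
    """Generic head: main task (out_dim=1) or scanner classifier (out_dim=n_scanners)."""
    def __init__(self, feat_dim=64, out_dim=1):
        super().__init__()
        self.net = nn.Linear(feat_dim, out_dim)

    def forward(self, f):
        return self.net(f)

def confusion_loss(scanner_logits):
    # Minimised when the scanner classifier's softmax output is uniform,
    # i.e. when the shared features carry no scanner information.
    return -F.log_softmax(scanner_logits, dim=1).mean()

# Assumed optimiser wiring for the three alternating steps:
#   opt_main      -> extractor + task_head parameters
#   opt_scanner   -> scanner_head parameters only
#   opt_confusion -> extractor parameters only
def training_step(extractor, task_head, scanner_head,
                  opt_main, opt_scanner, opt_confusion,
                  x, y_age, y_scanner):
    # 1) main task: fit e.g. age prediction with the shared features
    opt_main.zero_grad()
    loss_task = F.mse_loss(task_head(extractor(x)).squeeze(1), y_age)
    loss_task.backward()
    opt_main.step()

    # 2) scanner classifier: learn to predict scanner from *frozen* features
    opt_scanner.zero_grad()
    loss_scanner = F.cross_entropy(scanner_head(extractor(x).detach()), y_scanner)
    loss_scanner.backward()
    opt_scanner.step()

    # 3) unlearning: update the extractor so the scanner classifier is confused
    opt_confusion.zero_grad()
    loss_conf = confusion_loss(scanner_head(extractor(x)))
    loss_conf.backward()
    opt_confusion.step()

    return loss_task.item(), loss_scanner.item(), loss_conf.item()
```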

citations: Cited by 103 publications (75 citation statements)
references: References 46 publications (55 reference statements)
“…Also, although PAC 2019 provides a true measurement for generalisability of models to unseen data (because the test set labels are hidden from the participants), this does not guarantee the generalisability to unseen scanning site (because the test set follows the same site and age distribution as the training set). For applications requiring site generalisability, see recent work aiming to address this specific issue ( 33 ).…”
Section: Discussion (mentioning)
confidence: 99%
“…DANNs use a label predictor and a domain classifier to optimize the features so that they are discriminative for the main task but non-discriminative between the domains. Adapting the same framework as proposed in [186], Dinsdale et al [187] utilized an iterative update approach aimed at removing scanner information from the learned features. Studies by Rozantsev et al [181] and Sun and Saenko [182] have adapted divergence-based approaches for domain adaptation by using a two-stream CNN architecture (one in the source domain with synthetic images and the other in the target domain with real images) with unshared weights and the DeepCORAL [183] architecture, respectively. Their methodologies provided a domain-invariant representation by trying to reduce the divergence (i.e., the gap or distance) between the feature distributions of the source and target data (both use non-medical images).…”
Section: Normalization Using Deep Learning (mentioning)
confidence: 99%
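The quoted passage contrasts DANN-style training with the iterative update used in [187]. For reference, a DANN couples the label predictor and domain classifier through a gradient reversal layer; a minimal sketch of that layer (written against standard PyTorch autograd, not the cited papers' code) is:

```python
# Minimal sketch of a DANN-style gradient reversal layer (Ganin & Lempitsky).
# The forward pass is the identity; the backward pass flips (and scales) the
# gradient, so the feature extractor learns to fool the domain classifier while
# the classifier itself still learns to separate domains.
import torch

class GradReverse(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lambd * grad_output, None

def grad_reverse(x, lambd=1.0):
    return GradReverse.apply(x, lambd)

# Typical use inside a forward pass (feature_extractor, label_head and
# domain_head are assumed nn.Module instances):
#   feats  = feature_extractor(x)
#   y_pred = label_head(feats)                  # discriminative for the main task
#   d_pred = domain_head(grad_reverse(feats))   # non-discriminative across domains
```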
“…The figures show example slices from the 3 canonical views per scan, as discussed above, for slices intersecting at (15, 15, 15) and (25, 25, 25).…”
Section: Image Pre-processing (mentioning)
confidence: 99%
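The coordinates (15, 15, 15) and (25, 25, 25) in the quoted text are voxel indices at which the three canonical views are sliced. A small illustrative sketch of extracting such slices from a NIfTI volume with nibabel (the file name and helper function are hypothetical; the coordinates follow the quoted example) is:

```python
# Illustrative sketch of pulling the three canonical 2D views out of a NIfTI
# volume at a given voxel coordinate using nibabel; the file name and helper
# function are hypothetical assumptions.
import nibabel as nib

def canonical_slices(nifti_path, coord):
    vol = nib.load(nifti_path).get_fdata()
    x, y, z = coord
    return {
        "sagittal": vol[x, :, :],  # x-sagittal view
        "coronal":  vol[:, y, :],  # y-coronal view
        "axial":    vol[:, :, z],  # z-axial view
    }

# e.g. canonical_slices("mni_template.nii.gz", (15, 15, 15))
#      canonical_slices("mni_template.nii.gz", (25, 25, 25))
```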
“…Figure 2. 2D slice representation showing y-coronal, x-sagittal, and z-axial views of the brain (MNI template, OASIS scan) for slices intersecting at (15, 15, 15) and (25, 25, 25).…”
(mentioning)
confidence: 99%