2022
DOI: 10.1109/tmi.2022.3193146
AADG: Automatic Augmentation for Domain Generalization on Retinal Image Segmentation

Abstract: Convolutional neural networks have been widely applied to medical image segmentation and have achieved considerable performance. However, the performance may be significantly affected by the domain gap between training data (source domain) and testing data (target domain). To address this issue, we propose a data manipulation based domain generalization method, called Automated Augmentation for Domain Generalization (AADG). Our AADG framework can effectively sample data augmentation policies that generate nove…
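The abstract describes searching over data augmentation policies to close the domain gap. A minimal illustrative sketch of sampling and applying a random augmentation policy is shown below; the operations, magnitude range, and sampler are assumptions for illustration only, not AADG's actual search space or its learned policy-search procedure.

```python
import random
from typing import Callable, List, Tuple

# Hypothetical intensity operations on a flat list of pixel values in [0, 1];
# AADG's real search space over image transformations differs.
def adjust_brightness(x: List[float], m: float) -> List[float]:
    return [min(1.0, v * (1 + 0.5 * m)) for v in x]

def adjust_contrast(x: List[float], m: float) -> List[float]:
    mean = sum(x) / len(x)
    return [mean + (v - mean) * (1 + m) for v in x]

def adjust_gamma(x: List[float], m: float) -> List[float]:
    return [v ** (1 + m) for v in x]

OPS: List[Callable[[List[float], float], List[float]]] = [
    adjust_brightness, adjust_contrast, adjust_gamma,
]

def sample_policy(n_ops: int = 2, seed: int = 0) -> List[Tuple[Callable, float]]:
    """Sample a random policy: n_ops (operation, magnitude) pairs."""
    rng = random.Random(seed)
    return [(rng.choice(OPS), rng.uniform(0.0, 0.3)) for _ in range(n_ops)]

def apply_policy(policy: List[Tuple[Callable, float]],
                 image: List[float]) -> List[float]:
    """Apply each sampled operation in sequence to the image."""
    for op, magnitude in policy:
        image = op(image, magnitude)
    return image

policy = sample_policy()
augmented = apply_policy(policy, [0.2, 0.5, 0.8])
```

In an automated-augmentation setting, such sampled policies would then be scored (e.g., by downstream segmentation performance) and the sampler updated accordingly; this sketch shows only the sampling and application step.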

Cited by 29 publications (4 citation statements)
References 51 publications
“…Due to the well-understood differences between domains in medical imaging, data augmentation remains a strong contender [5]. Most methods either utilize a large number of transformations [29] or introduce targeted augmentations that mimic domain differences, with handcrafted or adversarial augmentation [9]. Other methods have been introduced that rely on regularizing deep learning models, either by regularizing the feature space itself (e.g., linear dependency regularization [30]) or by regularizing the model's parameters, done through explicit means [4] or meta-learning [31].…”
Section: Domain Generalization Methods for Medical Imaging
confidence: 99%
“…While there are many methods for domain generalization, most operate by attempting to address the first point of failure of generalization: using domain-specific representations [4], [5]. This can be addressed explicitly, with something as simple as data augmentation [6] that mimics the distributional shift between domains, or it can be learnt implicitly with adversarial learning [7]-[9], which directly optimizes a neural network to remove all domain-identifiable information from the feature representations. In a similar vein, there are methods that use disentangled representations [10], [11] (like those used for style transfer networks) that separate domain-variant and domain-invariant representations, which are often learnt adversarially as well.…”
Section: Domain Generalization Mechanisms
confidence: 99%
“…The results evidently showed the robustness of DL with respect to ultra-high field MRIs across scanning acquisition protocols. The performance of deep learning models is largely influenced by the training datasets (Zhang et al., 2020; Guan et al., 2021; Lyu et al., 2022). In order to achieve optimal performance, it is crucial for testing images to follow a similar distribution to that of the training images.…”
Section: Figure
confidence: 99%
“…The work in [6] addressed the domain gap between training and testing data by introducing a novel approach for domain generalization in medical image segmentation. In order to align retinal Optical Coherence Tomography (OCT) B-scans and account for stochastic shifts and orientation changes in retinal layers, the researchers suggested a preprocessing step in [7].…”
Section: Review of Existing Models
confidence: 99%