Introduction: Deep learning (DL) models for medical imaging have developed rapidly. However, DL requires large labeled datasets for training, and obtaining large-scale labeled data remains a challenge; moreover, multi-center datasets suffer from heterogeneity due to patient diversity and varying imaging protocols. Domain adaptation (DA) has been developed to transfer knowledge from a labeled source domain to a related but unlabeled target domain, in either image space or feature space. DA is a form of transfer learning (TL) that can improve model performance when models are applied across multiple different datasets.
Objective: In this survey, we review the state-of-the-art DL-based DA methods for medical imaging. We aim to summarize recent advances, highlighting the motivation, challenges, and opportunities, and to discuss promising directions for future work in DA for medical imaging.
Methods: We surveyed peer-reviewed publications from leading biomedical journals and conferences published between 2017 and 2020 that reported the use of DA in medical imaging applications, grouping them by methodology, image modality, and learning scenario.
Results: We mainly focused on pathology and radiology as application areas. Among various DA approaches, we discussed domain transformation (DT) and latent feature-space transformation (LFST). We highlighted the role of unsupervised DA in image segmentation and described opportunities for future development.
Conclusion: DA has emerged as a promising solution to deal with the lack of annotated training data. Using adversarial techniques, unsupervised DA has achieved good performance, especially for segmentation tasks. Opportunities include domain transferability, multi-modal DA, and applications that benefit from synthetic data.
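The adversarial unsupervised DA methods mentioned above typically train a domain classifier against the feature extractor via a gradient reversal layer (GRL), as in DANN-style approaches. The following is a minimal toy sketch of the GRL trick in plain NumPy; the function names and the lambda schedule are illustrative, not from the surveyed papers.

```python
import numpy as np

# Gradient reversal layer (GRL): identity in the forward pass, but it
# flips (and scales) the gradient in the backward pass, so the feature
# extractor learns domain-invariant features by *maximizing* the domain
# classifier's loss while the classifier minimizes it.

def grl_forward(features):
    # Forward pass is the identity: features flow unchanged
    # into the domain classifier.
    return features

def grl_backward(grad_from_domain_classifier, lam=1.0):
    # Backward pass negates the gradient before it reaches the
    # feature extractor; lam trades off task vs. domain confusion.
    return -lam * grad_from_domain_classifier
```

In a full training loop, the task head receives ordinary gradients while the feature extractor receives the reversed domain gradients, pushing source and target feature distributions toward alignment.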
Stain normalization is a crucial pre-processing step in histopathological image processing and can improve the accuracy of downstream tasks such as segmentation and classification. To evaluate the effectiveness of stain normalization methods, various metrics based on color-perceptual similarity and stain color evaluation have been proposed. However, a considerable gap remains between metric evaluation and human perception, given the limited explanatory power of existing metrics and their inability to combine color and semantic information efficiently. Inspired by the effectiveness of deep neural networks in evaluating the perceptual similarity of natural images, in this paper we propose TriNet-P, a color-perceptual similarity metric for whole slide images based on deep metric embeddings. We evaluate the proposed approach using four publicly available breast cancer histological datasets. The benefit of our approach is that it efficiently represents the perceptual factors associated with H&E-stained images with minimal human intervention. We show that our metric can capture semantic similarities at both the subject (patient) and laboratory levels, and leads to better performance in image retrieval and clustering tasks.
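A metric based on deep metric embeddings, such as the triplet-style learning the name TriNet-P suggests, scores perceptual similarity as a distance between learned embedding vectors, with the embedding trained under a triplet margin loss. The sketch below, in plain NumPy, shows only that loss and distance computation; the function names, the margin value, and the assumption of a triplet objective are illustrative and not taken from the paper itself.

```python
import numpy as np

def triplet_margin_loss(anchor, positive, negative, margin=0.2):
    # Standard triplet loss on embedding vectors: pull the anchor
    # toward the positive and push it away from the negative until
    # the squared-distance gap exceeds the margin.
    d_pos = np.sum((anchor - positive) ** 2)
    d_neg = np.sum((anchor - negative) ** 2)
    return max(0.0, d_pos - d_neg + margin)

def perceptual_distance(emb_a, emb_b):
    # Once trained, perceptual similarity of two images is read off
    # as the Euclidean distance between their embeddings.
    return float(np.linalg.norm(emb_a - emb_b))
```

In such a scheme, patches from the same patient or laboratory would serve as anchor-positive pairs, so small embedding distances correspond to images a pathologist would judge perceptually similar.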