2019
DOI: 10.1007/978-3-030-32692-0_55
Communal Domain Learning for Registration in Drifted Image Spaces

Abstract: Designing a registration framework for images that do not share the same probability distribution is a major challenge in modern image analytics, yet a trivial task for the human visual system (HVS). Discrepancies in probability distributions, also known as drifts, can occur due to various reasons including, but not limited to, differences in sequences and modalities (e.g., MRI T1-T2 and MRI-CT registration) or acquisition settings (e.g., multisite, inter-subject, or intra-subject registrations). The popular assum…

Cited by 2 publications (2 citation statements)
References 10 publications
“…Among the surveyed 31 symmetric approaches, direct approaches operated on the feature representations across domains by minimizing their differences (via mutual information [ 63 ], maximum mean discrepancy [ 46 , 49 , 64 ], Euclidean distance [ 65 , 66 , 67 , 68 , 69 , 70 , 71 ], Wasserstein distance [ 72 ], and average likelihood [ 73 ]), maximizing their correlation [ 74 , 75 ] or covariance [ 36 ], and introducing sparsity with L1/L2 norms [ 42 , 76 ]. On the other hand, indirect approaches were applied via adversarial training [ 28 , 41 , 54 , 77 , 78 , 79 , 80 , 81 , 82 , 83 , 84 , 85 ], and knowledge distillation [ 86 ].…”
Section: Results
confidence: 99%
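The direct symmetric approaches described in the statement above align feature representations by minimizing a distributional distance between domains. As a purely illustrative sketch (not taken from the cited paper or any of the referenced works), the snippet below computes a Gaussian-kernel maximum mean discrepancy (MMD), one of the listed criteria, between two feature batches using NumPy; the function names, kernel bandwidth, and batch shapes are assumptions made for the example.

```python
import numpy as np

def gaussian_kernel(x, y, sigma=1.0):
    """Pairwise Gaussian (RBF) kernel values between rows of x and rows of y."""
    sq_dists = (np.sum(x**2, axis=1)[:, None]
                + np.sum(y**2, axis=1)[None, :]
                - 2.0 * x @ y.T)
    return np.exp(-sq_dists / (2.0 * sigma**2))

def mmd_loss(source_feats, target_feats, sigma=1.0):
    """Squared MMD between source- and target-domain feature batches.

    Minimizing this value pulls the two feature distributions together,
    which is the 'direct' alignment strategy the surveyed approaches use
    (with MMD as one of several possible distance criteria).
    """
    k_ss = gaussian_kernel(source_feats, source_feats, sigma)
    k_tt = gaussian_kernel(target_feats, target_feats, sigma)
    k_st = gaussian_kernel(source_feats, target_feats, sigma)
    return k_ss.mean() + k_tt.mean() - 2.0 * k_st.mean()

# Toy usage: two batches of 16-dimensional features drawn from shifted Gaussians.
rng = np.random.default_rng(0)
src = rng.normal(0.0, 1.0, size=(32, 16))
tgt = rng.normal(0.5, 1.0, size=(32, 16))
print(mmd_loss(src, tgt))  # larger values indicate a larger domain gap
```

In practice this scalar would be added to the task loss so that the feature extractor is trained to make the two domains statistically indistinguishable.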
“…For example, Dinsdale et al [45] proposed a method based on adversarial training that unlearns information about the scanner sites, enabling multi-scanner integration and generalization with a single classifier across data sets. Other works performed such a harmonization on the feature level by minimizing their distances via different metrics such as mutual information [52] or maximizing their correlation [53,54]. The results of this work show that a harmonization on the feature level is not necessarily needed if the differences across data sets are small (e.g.…”
Section: Comparison With Other Transfer Learning Approaches 441 Featu…
confidence: 91%
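The adversarial "unlearning" of scanner-site information mentioned in the statement above can be illustrated with a gradient-reversal layer: a site classifier is trained to predict the acquisition site from the features, while the reversed gradient pushes the feature extractor to remove that information. The sketch below is a minimal PyTorch example under assumed toy dimensions and network definitions; it is not the implementation of Dinsdale et al. [45].

```python
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    """Identity in the forward pass; multiplies the gradient by -alpha in the backward pass."""
    @staticmethod
    def forward(ctx, x, alpha):
        ctx.alpha = alpha
        return x.clone()

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.alpha * grad_output, None

# Hypothetical toy networks: 64-dim input features, 2 task classes, 3 scanner sites.
feature_extractor = nn.Sequential(nn.Linear(64, 32), nn.ReLU())
task_head = nn.Linear(32, 2)
site_head = nn.Linear(32, 3)

x = torch.randn(8, 64)                 # toy batch of input features
y_task = torch.randint(0, 2, (8,))     # main-task labels
y_site = torch.randint(0, 3, (8,))     # scanner-site labels

feats = feature_extractor(x)
task_loss = nn.functional.cross_entropy(task_head(feats), y_task)
# The reversed gradient makes the extractor worse at encoding site identity,
# while the site head itself is still trained to predict it.
site_loss = nn.functional.cross_entropy(site_head(GradReverse.apply(feats, 1.0)), y_site)
(task_loss + site_loss).backward()
```

This single shared task head across sites is what enables multi-scanner integration once site information has been suppressed in the shared features.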