2021
DOI: 10.1016/j.neuroimage.2021.118569

Unsupervised MR harmonization by learning disentangled representations using information bottleneck theory

Cited by 73 publications (67 citation statements)
References 21 publications
“…Direct translation of existing harmonisation methods into FL frameworks is non-trivial. Most deep learning methods for harmonisation are based on generative frameworks [3,12,26,27], and, although federated equivalents to GANs and VAEs are being developed [16,25], additional challenges exist for harmonisation approaches that require simultaneous access to source and target data [8]. Additionally, many methods require paired data, which is not possible with distributed data and unlikely to exist in large multisite studies.…”
Section: Introduction (mentioning)
confidence: 99%
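To make the federated constraint in this statement concrete, here is a minimal sketch of federated averaging (FedAvg) in Python. Only model weights travel between sites and server, which is why a harmonisation method needing simultaneous access to source and target images cannot be dropped in directly. The function names and the placeholder gradient step are hypothetical, not taken from the cited methods.

```python
# Minimal FedAvg sketch, assuming each site holds its own private MR data.
import numpy as np

def local_update(weights, site_data, lr=1e-3):
    """One site trains on its private data and returns updated weights.
    The 'training' here is a placeholder gradient step; a real
    harmonisation model would run epochs of GAN/VAE updates."""
    grads = {k: np.random.randn(*w.shape) * 0.01 for k, w in weights.items()}
    return {k: w - lr * grads[k] for k, w in weights.items()}

def fedavg_round(global_weights, site_datasets):
    """Each site updates locally; only weights reach the server.
    No site ever sees another site's source or target images, so
    paired source/target access is unavailable by construction."""
    local = [local_update(global_weights, d) for d in site_datasets]
    return {k: np.mean([lw[k] for lw in local], axis=0)
            for k in global_weights}

weights = {"conv1": np.zeros((16, 1, 3, 3)), "fc": np.zeros((10, 16))}
site_datasets = [None, None, None]   # stand-ins for private site data
for _ in range(5):                   # five communication rounds
    weights = fedavg_round(weights, site_datasets)
```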
“…Additionally, many methods require paired data, which is not possible with distributed data and unlikely to exist in large multisite studies. Further, most generative methods are data-hungry [3,27], which casts doubt on whether sufficient data would be available at local sites.…”
Section: Introduction (mentioning)
confidence: 99%
“…The recent development of disentangled representation learning benefits various medical image analysis tasks including segmentation [17,22,10], quality assessment [12,25], domain adaptation [26], and image-to-image translation (I2I) [9,29,30]. The underlying assumption of disentanglement is that a high-dimensional observation x is generated by a latent variable z, where z can be decomposed into independent factors with each factor capturing a certain type of variation of x, i.e., the probability density functions satisfy p(z_1, z_2) = p(z_1)p(z_2) and z = (z_1, z_2) [19].…”
Section: Introduction (mentioning)
confidence: 99%
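The factorised-density assumption quoted above can be verified numerically for a toy independent Gaussian latent. This sketch is illustrative only; it assumes a standard-normal prior over each factor, not the cited model.

```python
# Check p(z1, z2) = p(z1) p(z2) for an independent Gaussian prior.
import numpy as np
from scipy.stats import norm, multivariate_normal

z1, z2 = 0.3, -1.2                      # an arbitrary point in latent space
joint = multivariate_normal(mean=[0, 0], cov=np.eye(2)).pdf([z1, z2])
product = norm.pdf(z1) * norm.pdf(z2)   # product of the marginals
assert np.isclose(joint, product)       # holds because the factors are independent
```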
“…The underlying assumption of disentanglement is that a high-dimensional observation x is generated by a latent variable z, where z can be decomposed into independent factors with each factor capturing a certain type of variation of x, i.e., the probability density functions satisfy p(z_1, z_2) = p(z_1)p(z_2) and z = (z_1, z_2) [19]. For medical images, it is commonly assumed that z is a composition of contrast (i.e., acquisition-related) and anatomical information of image x [7,9,30,22,17]. While the contrast representations capture specific information about the imaging modality, acquisition parameters, and cohort, the anatomical representations are generally assumed to be invariant to image domains.…”
Section: Introduction (mentioning)
confidence: 99%
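The contrast/anatomy split described in this statement is typically realised with two encoders and one decoder. Below is a minimal PyTorch sketch of that design under stated assumptions: module sizes, names, and the single-convolution decoder are hypothetical, not the architecture of the cited paper.

```python
# Two-encoder disentanglement sketch: a spatial anatomy code (assumed
# domain-invariant) plus a global contrast code (acquisition-related),
# recombined by a decoder.
import torch
import torch.nn as nn

class DisentangledHarmonizer(nn.Module):
    def __init__(self, ch=16):
        super().__init__()
        # anatomy encoder: preserves spatial structure
        self.enc_anatomy = nn.Sequential(
            nn.Conv2d(1, ch, 3, padding=1), nn.ReLU(),
            nn.Conv2d(ch, ch, 3, padding=1))
        # contrast encoder: pools to a global, per-image code
        self.enc_contrast = nn.Sequential(
            nn.Conv2d(1, ch, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1))
        # decoder: image from anatomy + spatially broadcast contrast
        self.dec = nn.Conv2d(2 * ch, 1, 3, padding=1)

    def forward(self, x_anatomy, x_contrast):
        a = self.enc_anatomy(x_anatomy)               # (B, ch, H, W)
        c = self.enc_contrast(x_contrast)             # (B, ch, 1, 1)
        c = c.expand(-1, -1, a.shape[2], a.shape[3])  # broadcast over space
        return self.dec(torch.cat([a, c], dim=1))

# Harmonisation: source-site anatomy rendered in target-site contrast.
model = DisentangledHarmonizer()
src, tgt = torch.randn(2, 1, 64, 64), torch.randn(2, 1, 64, 64)
harmonised = model(src, tgt)
```

Swapping which image feeds the contrast encoder is what makes the same network perform harmonisation at test time, consistent with the assumption that only the contrast code varies across sites.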