2018
DOI: 10.1007/978-3-030-00536-8_4
Cross-Modality Image Synthesis from Unpaired Data Using CycleGAN

Cited by 149 publications (101 citation statements)
References 16 publications
“…Adversarial loss functions have recently been demonstrated for various medical imaging applications with reliable capture of high-frequency texture information [28]–[48]. In the domain of cross-modality image synthesis, important applications include CT to PET synthesis [29], [40], MR to CT synthesis [28], [33], [38], [42], [48], CT to MR synthesis [36], and retinal vessel map to image synthesis [35], [41]. Inspired by this success, here we introduce conditional GAN models for synthesizing images of […] The pGAN method is based on a conditional adversarial network with a generator G, a pre-trained VGG16 network V, and a discriminator D. Given an input image in a source contrast (e.g., T1-weighted), G learns to generate the image of the same anatomy in a target contrast (e.g., T2-weighted).…”
Section: Introduction
confidence: 99%
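The pGAN objective described above combines an adversarial term (from discriminator D) with pixel-wise and perceptual terms (the latter computed on features from the pre-trained VGG16 network V). A minimal toy sketch of how such a composite generator loss behaves, with a hypothetical `feature_map` function standing in for VGG16 features and illustrative weight values, might look like:

```python
import numpy as np

# Toy stand-in for a pre-trained feature extractor such as VGG16:
# horizontal image gradients as a crude proxy for learned features.
def feature_map(img):
    return np.diff(img, axis=1)

def generator_loss(fake_target, real_target, d_score_fake,
                   lam_pix=100.0, lam_perc=100.0):
    # Adversarial term: generator wants D to score the fake near 1.
    adv = -np.log(d_score_fake + 1e-8)
    # Pixel-wise L1 term against the ground-truth target contrast.
    pix = np.mean(np.abs(fake_target - real_target))
    # Perceptual term: L1 distance in (stand-in) feature space.
    perc = np.mean(np.abs(feature_map(fake_target) - feature_map(real_target)))
    return adv + lam_pix * pix + lam_perc * perc

rng = np.random.default_rng(0)
real = rng.random((8, 8))
loss_perfect = generator_loss(real.copy(), real, d_score_fake=0.9)
loss_bad = generator_loss(rng.random((8, 8)), real, d_score_fake=0.9)
assert loss_perfect < loss_bad  # matching the target lowers the loss
```

The weights `lam_pix` and `lam_perc` are illustrative; the cited work tunes its own weighting between the adversarial and reconstruction terms.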
“…Applying this idea to the field of medical image synthesis, Nie et al. enhanced the conditional GAN structure with an auto-context model and achieved good results in the task of tomographic CT image synthesis from its MR counterpart [6]. Furthermore, in [7], [25]–[27], the successful training of a GAN on unpaired MR and CT images was shown. As a result of these successes, GANs are now a frequently used tool in medical imaging research that extends beyond the realm of image-to-image translation [28]–[32].…”
Section: Image-to-image Translation
confidence: 99%
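Training on unpaired MR and CT images is what CycleGAN's cycle-consistency constraint enables: a forward mapping G (e.g., MR to CT) and a backward mapping F are trained jointly so that F(G(x)) recovers x without needing aligned image pairs. A toy numeric sketch, with placeholder linear maps standing in for the learned generators:

```python
import numpy as np

def cycle_consistency_loss(G, F, x_a, x_b):
    # || F(G(x_a)) - x_a ||_1  +  || G(F(x_b)) - x_b ||_1
    return (np.mean(np.abs(F(G(x_a)) - x_a)) +
            np.mean(np.abs(G(F(x_b)) - x_b)))

G = lambda x: 2.0 * x + 1.0        # placeholder "MR -> CT" map
F_inv = lambda y: (y - 1.0) / 2.0  # exact inverse -> zero cycle loss
F_bad = lambda y: y                # not an inverse -> positive cycle loss

x_a = np.linspace(0.0, 1.0, 16)    # toy "MR" samples
x_b = G(x_a)                       # toy "CT" samples
assert np.isclose(cycle_consistency_loss(G, F_inv, x_a, x_b), 0.0)
assert cycle_consistency_loss(G, F_bad, x_a, x_b) > 0.0
```

In the real method G and F are convolutional networks and the cycle term is added to the two adversarial losses; the linear maps here only illustrate why an inconsistent backward mapping is penalized.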
“…There are a few observations about GANs: (a) CycleGAN does not guarantee consistent translation of minor anatomical structures and boundaries [53], and thus needs additional constraints such as gradient [53] and shape consistency [51]. For instance, Jiang et al. [54] incorporated tumor-shape and feature-based losses to preserve tumors while translating CT data to MRI data; (b) attention networks can account for the varying transferability of different image regions [55].…”
Section: Bidirectional Translation
confidence: 99%
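The gradient-consistency constraint mentioned above penalizes mismatch between the spatial gradients of the translated image and those of the input, so that fine anatomical boundaries survive translation even when intensities change. Exact formulations vary across the cited papers; the following is an illustrative L1 variant under that assumption:

```python
import numpy as np

def spatial_gradients(img):
    # Finite-difference gradients along each image axis.
    return np.diff(img, axis=1), np.diff(img, axis=0)

def gradient_consistency_loss(translated, source):
    tgx, tgy = spatial_gradients(translated)
    sgx, sgy = spatial_gradients(source)
    return np.mean(np.abs(tgx - sgx)) + np.mean(np.abs(tgy - sgy))

src = np.tile(np.linspace(0.0, 1.0, 8), (8, 1))  # smooth ramp "anatomy"
shifted = src + 0.5       # global intensity shift: same edges, zero penalty
flat = np.zeros_like(src) # edges destroyed: positive penalty
assert np.isclose(gradient_consistency_loss(shifted, src), 0.0)
assert gradient_consistency_loss(flat, src) > 0.0
```

Note the penalty is invariant to global intensity changes, which is exactly the desired behavior for cross-modality translation, where tissue intensities differ but boundary locations should not.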