2019
DOI: 10.1007/978-3-030-33391-1_8

Cross-Modality Knowledge Transfer for Prostate Segmentation from CT Scans

Abstract: Creating large-scale, high-quality annotations is a known challenge in medical imaging. In this work, building on the CycleGAN algorithm, we propose leveraging annotations from one modality for use in other modalities. More specifically, the proposed algorithm creates highly realistic synthetic CT images (SynCT) from prostate MR images using unpaired data sets. By using SynCT images (without segmentation labels) and MR images (with segmentation labels available), we have trained a deep segmentation network fo…

Cited by 15 publications (5 citation statements)
References 13 publications
“…These datasets were resampled to the same target spacing (2, 2, 2) and embedded into a 256 × 256 × 256 3D volumetric space [35]. After normalizing and window leveling [−200, 250] [36][37][38][39], to enhance the contrast and texture of soft tissue, the foreground of input voxels was selected from the background by an intersection with mask voxels images using MATLAB R2022a. To increase the amount of data for training the network, we augmented the CT images…”
Section: Patient Cohorts and Data Pre-processing
confidence: 99%
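The pre-processing quoted above (resampling, window leveling to [−200, 250] HU, normalization, and foreground selection by a mask) can be sketched in a few lines. This is a minimal NumPy illustration of the window-level and masking steps, not the citing authors' MATLAB pipeline; the function names and default window are assumptions taken from the quoted text.

```python
import numpy as np

def window_level(volume, lo=-200.0, hi=250.0):
    """Clip CT intensities (in HU) to the window [lo, hi] and rescale to [0, 1].

    The [-200, 250] default matches the soft-tissue window quoted in the text.
    """
    vol = np.clip(volume.astype(np.float32), lo, hi)
    return (vol - lo) / (hi - lo)

def apply_foreground_mask(volume, mask):
    """Keep only voxels inside a binary mask (foreground/background separation)."""
    return volume * (mask > 0)
```

Resampling to a (2, 2, 2) mm spacing would typically be done with an imaging library (e.g. SimpleITK or MONAI) before this step.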
“…Incorporating data augmentation and SSIM as cycle-consistency loss, they manage to achieve a difference in Dice coefficient between their synthetic CT images and real CT images of 0.07. 11 Similar work on cross-modality transfer using CycleGAN for segmentation was done in the areas of lung tumor segmentation 12 and segmentation of the parotid glands. 13 One major difference between the mentioned image-translation methods and GIN augmentation is that GIN augmentation removes the need to train a whole new deep learning model and can be employed directly as a data augmentation method for the segmentation model.…”
Section: Introduction
confidence: 99%
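The Dice coefficient used in the comparison above (a 0.07 gap between synthetic and real CT) measures overlap between two binary segmentations. A minimal sketch, not tied to any cited implementation:

```python
import numpy as np

def dice(a, b, eps=1e-7):
    """Dice coefficient between two binary masks: 2|A∩B| / (|A| + |B|).

    Returns 1.0 for identical non-empty masks, 0.0 for disjoint masks;
    eps guards against division by zero when both masks are empty.
    """
    a = a.astype(bool)
    b = b.astype(bool)
    intersection = np.logical_and(a, b).sum()
    return 2.0 * intersection / (a.sum() + b.sum() + eps)
```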
“…Ge et al [35] further integrated structural constraints into CycleGAN to overcome anatomical discrepancies through mutual information and L1 loss between original MR and synthetic CT images. Liu et al [36] refined this approach by incorporating a cost function based on the structural similarity index to generate synthetic CT images from prostate MR images. In the CT-to-MR direction, Dong et al [37] applied CycleGAN to generate synthetic MR images to improve the segmentation of multiple organs in pelvic CT images.…”
Section: Introduction
confidence: 99%
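The SSIM-based cost function mentioned in the last statement replaces the usual L1 cycle-consistency term with a structural-similarity term between an image and its reconstruction. A hedged sketch using the global (single-window) SSIM formula; production implementations typically use a windowed SSIM (e.g. `skimage.metrics.structural_similarity`), and the function names here are illustrative only:

```python
import numpy as np

def ssim_global(x, y, data_range=1.0, k1=0.01, k2=0.03):
    """Global SSIM over whole images with intensities in [0, data_range]."""
    c1 = (k1 * data_range) ** 2
    c2 = (k2 * data_range) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / (
        (mx ** 2 + my ** 2 + c1) * (vx + vy + c2)
    )

def ssim_cycle_loss(x, x_reconstructed):
    """Cycle-consistency loss: 1 - SSIM between an image and its
    reconstruction x -> G(F(x)); zero when the cycle is perfect."""
    return 1.0 - ssim_global(x, x_reconstructed)
```

In a CycleGAN training loop this term would be added to the adversarial losses in both translation directions, weighted by a cycle-consistency coefficient.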