2023
DOI: 10.1109/jtehm.2022.3221918
Tumor-Attentive Segmentation-Guided GAN for Synthesizing Breast Contrast-Enhanced MRI Without Contrast Agents

Abstract: Objective: Breast dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) is a sensitive imaging technique critical for breast cancer diagnosis. However, the administration of contrast agents poses a potential risk. This can be avoided if contrast-enhanced MRI can be obtained without using contrast agents. Thus, we aimed to generate T1-weighted contrast-enhanced MRI (ceT1) images from pre-contrast T1-weighted MRI (preT1) images in the breast. Methods: We proposed a generative adversarial network to synt…

Cited by 15 publications (8 citation statements)
References 39 publications (40 reference statements)
“…1 GANs have also been applied in the MRI modality, where segmentation-guided GANs have been proposed to synthesize T1-weighted contrast-enhanced MRI images from pre-contrast T1-weighted MRI images. 2 However, the majority of these studies relied solely on visual assessments or generic metrics to evaluate the quality of the synthesized images, lacking quantitative analyses of the specific effects on targeted clinical tasks. The primary metrics employed in these studies to assess the quality of synthesized images include the normalized root mean squared error (NRMSE), the Pearson cross-correlation coefficient (CC), the peak signal-to-noise ratio (PSNR), and the structural similarity index measure (SSIM), computed for both the tumor and the entire breast area.…”
Section: Purpose
confidence: 99%
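The four metrics named in the statement above are standard full-reference image-quality measures. As a minimal sketch (the function names and the simplified single-window SSIM are assumptions for illustration; published evaluations typically use a locally windowed SSIM, e.g. from scikit-image):

```python
import numpy as np

def nrmse(ref, pred):
    # Root mean squared error, normalized by the reference intensity range.
    rmse = np.sqrt(np.mean((ref - pred) ** 2))
    return rmse / (ref.max() - ref.min())

def psnr(ref, pred, data_range=1.0):
    # Peak signal-to-noise ratio in dB for images scaled to [0, data_range].
    mse = np.mean((ref - pred) ** 2)
    return 10.0 * np.log10(data_range ** 2 / mse)

def pearson_cc(ref, pred):
    # Pearson cross-correlation coefficient over flattened images.
    return np.corrcoef(ref.ravel(), pred.ravel())[0, 1]

def global_ssim(ref, pred, data_range=1.0):
    # Simplified global SSIM over the whole image (one window);
    # the full index averages this over sliding local windows.
    c1, c2 = (0.01 * data_range) ** 2, (0.03 * data_range) ** 2
    mu_x, mu_y = ref.mean(), pred.mean()
    var_x, var_y = ref.var(), pred.var()
    cov = np.mean((ref - mu_x) * (pred - mu_y))
    return ((2 * mu_x * mu_y + c1) * (2 * cov + c2)) / (
        (mu_x ** 2 + mu_y ** 2 + c1) * (var_x + var_y + c2))
```

In tumor-focused evaluations, the same functions would be applied twice: once to the whole breast region and once restricted to the segmented tumor voxels.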
“…8,9,11 With recent advances in deep learning, training deep generative models to generate synthetic contrast-enhanced imaging data as an alternative to contrast agent administration has become a promising field of research. 12,13 For instance, Kim et al. 14 propose a tumour-attentive segmentation-guided generative adversarial network (GAN) 15 that generates a contrast-enhanced T1 breast MRI image from its pre-contrast counterpart while being guided by the predictions of a surrogate segmentation network. Similarly, Zhao et al. 16 propose Tripartite-GAN to synthesise contrast-enhanced from non-contrast-enhanced liver MRI with a chained tumour detection model.…”
Section: Introduction
confidence: 99%
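The "tumor-attentive" idea described above can be sketched as a generator objective whose reconstruction term is re-weighted by a predicted tumor mask, so errors inside the tumor region are penalized more heavily. This is an illustrative sketch, not the authors' exact formulation; `tumor_weight` and `lam` are hypothetical hyperparameters:

```python
import numpy as np

def tumor_attentive_l1(real_ce, fake_ce, tumor_mask, tumor_weight=10.0):
    # Weight map: 1 everywhere, tumor_weight inside the (soft) tumor mask.
    # real_ce / fake_ce: ground-truth and synthesized contrast-enhanced images.
    w = 1.0 + (tumor_weight - 1.0) * tumor_mask
    return np.mean(w * np.abs(real_ce - fake_ce))

def generator_loss(d_fake, real_ce, fake_ce, tumor_mask, lam=100.0):
    # Non-saturating adversarial term (d_fake: discriminator scores in (0, 1)
    # for the synthesized image) plus the tumor-weighted L1 reconstruction.
    adv = -np.mean(np.log(d_fake + 1e-8))
    return adv + lam * tumor_attentive_l1(real_ce, fake_ce, tumor_mask)
```

With `tumor_weight=1.0` this reduces to the plain L1 term used in standard image-to-image GANs; larger values push the generator to prioritize fidelity in the segmented tumor region, which is where contrast enhancement matters most diagnostically.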
“…Given the novelty of this research field, high variability is still observed in the literature with regard to the choice of input data used to train vCE imaging neural networks. T1-weighted (T1w) sequences (20,21,23) have been used for vCE breast MRI, as have combinations of T1w and T2-weighted (T2w) acquisitions (22) and more complex protocols including diffusion-weighted imaging (DWI) (26).…”
Section: Introduction
confidence: 99%
“…The generation of virtual contrast-enhanced (vCE) MRI scans from unenhanced acquisitions using deep learning was first reported in brain studies (14)(15)(16)(17)(18)(19). However, several recent publications (20)(21)(22)(23)(24)(25)(26) have shown the technical feasibility of such an approach for breast imaging. Given the novelty of this research field, high variability is still observed in the literature with regard to the choice of input data used to train vCE imaging neural networks.…”
Section: Introduction
confidence: 99%