2023
DOI: 10.3390/cancers15072017

CBCT-to-CT Translation Using Registration-Based Generative Adversarial Networks in Patients with Head and Neck Cancer

Abstract: Recently, deep learning with generative adversarial networks (GANs) has been applied in multi-domain image-to-image translation. This study aims to improve the image quality of cone-beam computed tomography (CBCT) by generating synthetic CT (sCT) that maintains the patient’s anatomy as in CBCT, while having the image quality of CT. As CBCT and CT are acquired at different time points, it is challenging to obtain paired images with aligned anatomy for supervised training. To address this limitation, the study i…
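The abstract frames CBCT-to-sCT generation as an adversarial image-to-image translation problem complicated by imperfect CBCT/CT alignment. The following is a minimal, illustrative PyTorch sketch of one such training step; the tiny networks, patch sizes, and loss weights are placeholders rather than the authors' architecture, and the L1 term is only meaningful when the CT has been registered to the CBCT.

```python
import torch
import torch.nn as nn

# Toy stand-ins for the generator and discriminator; real models would be U-Net / PatchGAN style.
G = nn.Sequential(nn.Conv3d(1, 16, 3, padding=1), nn.ReLU(),
                  nn.Conv3d(16, 1, 3, padding=1))
D = nn.Sequential(nn.Conv3d(1, 16, 3, stride=2, padding=1), nn.LeakyReLU(0.2),
                  nn.Conv3d(16, 1, 3, padding=1))

opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
adv = nn.MSELoss()   # least-squares GAN objective
l1 = nn.L1Loss()     # fidelity term; meaningful only if the CT is aligned with the CBCT

cbct = torch.randn(1, 1, 32, 64, 64)   # stand-in intensity-normalized patches
ct = torch.randn(1, 1, 32, 64, 64)

# Discriminator step: real CT vs. synthetic CT produced from CBCT.
sct = G(cbct).detach()
pred_real, pred_fake = D(ct), D(sct)
d_loss = adv(pred_real, torch.ones_like(pred_real)) + adv(pred_fake, torch.zeros_like(pred_fake))
opt_d.zero_grad(); d_loss.backward(); opt_d.step()

# Generator step: fool the discriminator while staying close to the reference CT.
sct = G(cbct)
pred_fake = D(sct)
g_loss = adv(pred_fake, torch.ones_like(pred_fake)) + 10.0 * l1(sct, ct)
opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```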

Citations: cited by 11 publications (3 citation statements)
References: 47 publications (80 reference statements)
“…These anatomical variations could potentially impact the training of the U-Net algorithm, which relies on paired image data. Therefore, exploring unsupervised models such as GANs could enhance the model's performance, particularly when dealing with unpaired image data (intra-individual co-registration) [18,[24][25][26][27][28][29][30]. Furthermore, the process of rigidly registering the CT and CBCT images might not have been adequate in establishing the required image similarity for network training.…”
Section: Discussion
confidence: 99%
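The statement above suggests unsupervised GANs for unpaired CBCT/CT data. A common way to make unpaired training work is a CycleGAN-style cycle-consistency constraint; the sketch below illustrates only that term, with toy 2D generators standing in for whatever architectures the cited works actually use.

```python
import torch
import torch.nn as nn

# Hypothetical tiny 2D generators for both translation directions.
G_cb2ct = nn.Sequential(nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
                        nn.Conv2d(16, 1, 3, padding=1))
G_ct2cb = nn.Sequential(nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
                        nn.Conv2d(16, 1, 3, padding=1))
l1 = nn.L1Loss()

cbct = torch.randn(4, 1, 128, 128)   # unpaired slices (different patients / time points)
ct = torch.randn(4, 1, 128, 128)

# Translate to the other domain and back; penalize the reconstruction error.
cycle_cbct = l1(G_ct2cb(G_cb2ct(cbct)), cbct)
cycle_ct = l1(G_cb2ct(G_ct2cb(ct)), ct)
cycle_loss = 10.0 * (cycle_cbct + cycle_ct)   # added to the adversarial losses of both domains
```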
“…The suggested approach utilizes image-to-image translation to facilitate registration and incorporates a geometry-consistent training scheme and a multi-scale registration network with partial sharing. Suwanraksa et al. [61] proposed a GAN with a registration network (RegNet) to improve the quality of synthetic CT (sCT) generated from Cone-Beam Computed Tomography (CBCT). The incorporation of RegNet led to reduced errors, improved image metrics, and sCT images maintaining anatomical accuracy compared to GANs without RegNet.…”
Section: Ref Description Liu et al. [60]
confidence: 99%
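The cited approach attaches a registration network to the GAN so that the supervised image loss is computed on an aligned pair. The sketch below shows the general idea with a toy 2D displacement-field network and torch.nn.functional.grid_sample; the architecture, dimensionality, and loss weights are assumptions, not the published RegNet.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class RegNet2D(nn.Module):
    """Toy displacement-field predictor: warps a moving image toward a fixed image."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Conv2d(2, 16, 3, padding=1), nn.ReLU(),
                                 nn.Conv2d(16, 2, 3, padding=1))  # per-pixel (dx, dy)

    def forward(self, moving, fixed):
        disp = self.net(torch.cat([moving, fixed], dim=1))          # B x 2 x H x W
        b, _, h, w = disp.shape
        # Identity sampling grid in normalized [-1, 1] coordinates, as grid_sample expects.
        ys, xs = torch.meshgrid(torch.linspace(-1, 1, h), torch.linspace(-1, 1, w), indexing="ij")
        grid = torch.stack([xs, ys], dim=-1).unsqueeze(0).expand(b, -1, -1, -1)
        warped = F.grid_sample(moving, grid + disp.permute(0, 2, 3, 1), align_corners=True)
        return warped, disp

regnet = RegNet2D()
sct = torch.randn(2, 1, 128, 128)   # generator output for a CBCT slice
ct = torch.randn(2, 1, 128, 128)    # planning CT, imperfectly aligned to the CBCT

ct_warped, disp = regnet(ct, sct)
image_loss = F.l1_loss(ct_warped, sct)                                        # fidelity on the aligned pair
smooth_loss = disp.diff(dim=2).abs().mean() + disp.diff(dim=3).abs().mean()   # keep the field smooth
loss = image_loss + 0.1 * smooth_loss
```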
“…Hardware-based methods attempt to decrease the influence caused by scattering by utilizing an anti-scatter grid when acquiring CBCT images [15,16]. Image post-processing methods mainly consist of deformation of pCT [17][18][19][20][21], an estimation of scatter kernels [22], Monte Carlo simulations of scatter distribution [23,24], histogram matching [25], and deep learning-based methods [26][27][28][29][30][31]. Deformation of pCT is one of the commonly used methods, which is based on deformable registration between pCT and CBCT.…”
Section: Introduction
confidence: 99%
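Among the post-processing methods listed in this statement, histogram matching is simple enough to sketch directly: CBCT intensities are mapped onto the intensity distribution of a reference CT via quantile mapping. The NumPy snippet below is a generic illustration with synthetic stand-in volumes, not the implementation of reference [25].

```python
import numpy as np

def histogram_match(cbct: np.ndarray, ct_ref: np.ndarray) -> np.ndarray:
    """Map CBCT voxel values so their empirical distribution matches the reference CT."""
    src_values, src_counts = np.unique(cbct.ravel(), return_counts=True)
    ref_values, ref_counts = np.unique(ct_ref.ravel(), return_counts=True)
    src_cdf = np.cumsum(src_counts) / cbct.size
    ref_cdf = np.cumsum(ref_counts) / ct_ref.size
    # For each source quantile, look up the reference intensity at the same quantile.
    mapped = np.interp(src_cdf, ref_cdf, ref_values)
    return np.interp(cbct.ravel(), src_values, mapped).reshape(cbct.shape)

# Synthetic stand-in volumes; real inputs would be HU-calibrated CT and raw CBCT arrays.
cbct = np.random.normal(-200, 300, size=(32, 64, 64))
ct_ref = np.random.normal(0, 250, size=(40, 96, 96))
cbct_matched = histogram_match(cbct, ct_ref)
```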