2021
DOI: 10.1002/mp.14624
Improving CBCT quality to CT level using deep learning with generative adversarial network

Abstract: Purpose To improve image quality and computed tomography (CT) number accuracy of daily cone beam CT (CBCT) through a deep learning methodology with generative adversarial network. Methods One hundred and fifty paired pelvic CT and CBCT scans were used for model training and validation. An unsupervised deep learning method, 2.5D pixel‐to‐pixel generative adversarial network (GAN) model with feature mapping was proposed. A total of 12 000 slice pairs of CT and CBCT were used for model training, while ten‐fold cr…
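The abstract outlines a 2.5D pixel-to-pixel (pix2pix-style) GAN with feature mapping for CBCT-to-CT translation. The paper's actual network layers, loss weights, and training settings are not reproduced here, so the following PyTorch sketch only illustrates the general scheme under stated assumptions: a toy encoder-decoder stands in for the U-Net generator, a PatchGAN-style discriminator exposes an intermediate feature map for a feature-matching term, and the 2.5D input is taken to be three adjacent CBCT slices mapped to one synthetic CT slice. All shapes, loss weights, and module names are illustrative, not the authors' implementation.

# Minimal sketch (not the authors' code): a pix2pix-style training step for
# CBCT-to-CT translation with a feature-matching term, assuming a 2.5D input
# of 3 adjacent CBCT slices -> 1 synthetic CT slice. Weights are placeholders.
import torch
import torch.nn as nn

class Generator(nn.Module):
    """Toy encoder-decoder standing in for the U-Net generator."""
    def __init__(self, in_ch=3, out_ch=1):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, 64, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.ConvTranspose2d(64, out_ch, 4, stride=2, padding=1), nn.Tanh(),
        )
    def forward(self, x):
        return self.net(x)

class Discriminator(nn.Module):
    """PatchGAN-style discriminator that also returns an intermediate feature map."""
    def __init__(self, in_ch=4):  # conditional input: CBCT stack (3) + CT/sCT slice (1)
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(in_ch, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2, inplace=True),
        )
        self.head = nn.Conv2d(64, 1, 4, stride=1, padding=1)
    def forward(self, x):
        f = self.features(x)
        return self.head(f), f

G, D = Generator(), Discriminator()
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4, betas=(0.5, 0.999))
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4, betas=(0.5, 0.999))
adv_loss, l1_loss = nn.BCEWithLogitsLoss(), nn.L1Loss()

cbct = torch.randn(4, 3, 256, 256)  # placeholder batch: 3 adjacent CBCT slices
ct = torch.randn(4, 1, 256, 256)    # placeholder paired planning-CT slice

# --- discriminator step: real (CBCT, CT) vs fake (CBCT, G(CBCT)) pairs ---
sct = G(cbct).detach()
real_logits, _ = D(torch.cat([cbct, ct], dim=1))
fake_logits, _ = D(torch.cat([cbct, sct], dim=1))
d_loss = (adv_loss(real_logits, torch.ones_like(real_logits))
          + adv_loss(fake_logits, torch.zeros_like(fake_logits)))
opt_d.zero_grad(); d_loss.backward(); opt_d.step()

# --- generator step: adversarial + L1 + feature-matching terms (weights illustrative) ---
sct = G(cbct)
fake_logits, fake_feat = D(torch.cat([cbct, sct], dim=1))
_, real_feat = D(torch.cat([cbct, ct], dim=1))
g_loss = (adv_loss(fake_logits, torch.ones_like(fake_logits))
          + 100.0 * l1_loss(sct, ct)
          + 10.0 * l1_loss(fake_feat, real_feat.detach()))
opt_g.zero_grad(); g_loss.backward(); opt_g.step()

The 2.5D formulation keeps the model fully 2D-convolutional while giving it some through-plane context from the neighbouring slices.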

Cited by 77 publications (110 citation statements)
References 31 publications
“…83,[124][125][126][127] The remaining 10 proved the validity of the transformation with dosimetric studies for photons, 66,71,101,[128][129][130][131] protons, 120 and for both photons and protons. 85,132,133 Only three studies investigated unpaired training 84,128,133; in 11 cases, paired training was implemented by matching the CBCT and ground truth CT by rigid or deformable registration. In Eck et al., 66 however, CBCT and CT were not registered for the training phase, as the authors claimed the first fraction CBCT was geometrically close enough to the planning CT for the network.…”
Section: CBCT-to-CT Generation (mentioning)
confidence: 99%
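Where paired training is used, as in most of the studies discussed above, the CBCT and the ground-truth CT are first brought onto a common grid by rigid or deformable registration. Below is a minimal rigid-registration sketch using SimpleITK; the file names, metric, optimizer settings, and default HU fill value are illustrative assumptions, not taken from any of the cited studies.

# Minimal sketch: rigidly register a planning CT onto the CBCT grid with SimpleITK,
# so that slice pairs can be extracted for paired (supervised) training.
import SimpleITK as sitk

fixed = sitk.ReadImage("cbct.nii.gz", sitk.sitkFloat32)          # hypothetical file names
moving = sitk.ReadImage("planning_ct.nii.gz", sitk.sitkFloat32)

reg = sitk.ImageRegistrationMethod()
reg.SetMetricAsMattesMutualInformation(numberOfHistogramBins=50)  # robust to CBCT/CT intensity differences
reg.SetOptimizerAsRegularStepGradientDescent(learningRate=1.0, minStep=1e-4, numberOfIterations=200)
reg.SetInitialTransform(
    sitk.CenteredTransformInitializer(fixed, moving, sitk.Euler3DTransform(),
                                      sitk.CenteredTransformInitializerFilter.GEOMETRY))
reg.SetInterpolator(sitk.sitkLinear)

transform = reg.Execute(fixed, moving)

# Resample the CT into the CBCT frame; -1000 HU (air) as default fill value.
ct_on_cbct_grid = sitk.Resample(moving, fixed, transform, sitk.sitkLinear, -1000.0)
sitk.WriteImage(ct_on_cbct_grid, "ct_registered_to_cbct.nii.gz")

A deformable step (for example, a B-spline transform initialised from this rigid result) can be added when residual anatomical mismatch matters, at the cost of possibly propagating registration errors into the training targets.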
“…RCT720 took approximately 30 s from the start of the patient scan to image reconstruction and provided resultant images. This was expected to be faster than deep learning techniques [20,21], which perform massive computation through complex neural networks after patient scan and image reconstruction. Furthermore, we expect that it would be more convenient and familiar to assess bone quality with the voxel values of CBCT images than with bone density values [21].…”
Section: Discussion (mentioning)
confidence: 99%
“…These physical limitations result in inaccurate voxel values and bone density assessments [18]. In the field of radiation therapy, some attempts to utilize deep learning techniques, such as the generative adversarial network (GAN), to correct CBCT artifacts, and to improve image quality of CBCT images were made [19,20]. A feasibility study was conducted to measure bone density directly and quantitatively from CBCT images using modified GAN [21].…”
Section: Introduction (mentioning)
confidence: 99%
“…11,24,25 A previous study using pelvic computed tomography images showed that pix2pix outperformed U-Net and other GAN techniques. 12 In this manner, we performed an experiment using U-Net based on pix2pix for pathological lesion segmentation in fundus photography.…”
Section: Methods (mentioning)
confidence: 99%
“…10 Because the single U-Net model does not consider the detailed features of the output images, generative adversarial network (GAN) framework can improve the performance of the U-Net model. 11,12 In GAN architecture, the generator and discriminator operate as adversaries to synthesize more realistic output images. 13 Pix2pix is a popular GAN technique using U-Net for image-to-image translation.…”
Section: Introduction (mentioning)
confidence: 99%
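For reference, the pix2pix objective mentioned in this excerpt combines a conditional adversarial loss with an L1 reconstruction term (Isola et al.'s formulation); here x is the conditioning input image, y the target image, z optional noise, and \lambda a weighting hyperparameter:

\mathcal{L}_{\mathrm{cGAN}}(G, D) = \mathbb{E}_{x,y}\!\left[\log D(x, y)\right] + \mathbb{E}_{x,z}\!\left[\log\!\big(1 - D(x, G(x, z))\big)\right]

\mathcal{L}_{L1}(G) = \mathbb{E}_{x,y,z}\!\left[\lVert y - G(x, z) \rVert_{1}\right]

G^{*} = \arg\min_{G}\max_{D}\; \mathcal{L}_{\mathrm{cGAN}}(G, D) + \lambda\,\mathcal{L}_{L1}(G)

Because the discriminator judges (x, y) or (x, G(x, z)) pairs, it enforces consistency with the input as well as realism, while the L1 term keeps the translation close to the paired ground truth; this is why paired, registered training data matter for this family of models.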