2021
DOI: 10.1002/acm2.13327
Deep learning‐based synthetic CT generation for MR‐only radiotherapy of prostate cancer patients with 0.35T MRI linear accelerator

Abstract: This is an open access article under the terms of the Creative Commons Attribution License, which permits use, distribution and reproduction in any medium, provided the original work is properly cited.

Cited by 16 publications (11 citation statements)
References 24 publications
“…One study is excluded due to a different MAE calculation approach [28]. The monotonic improvement confirms and extends previous results limited to a smaller number of patients [27]. The bone reproduction quality also improves (Fig.…”
Section: Discussion (supporting)
confidence: 91%
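The comparability issue raised in this excerpt usually comes down to how the MAE in HU is computed, for example over the whole image grid versus only inside a body contour. The following is a minimal sketch of a masked MAE in Python/NumPy; the function name, array names and the random test volumes are purely illustrative, not taken from any of the cited studies.

```python
import numpy as np

def mae_hu(sct, ct, mask=None):
    """Mean absolute error in Hounsfield units between a synthetic CT and a reference CT.

    sct, ct : co-registered volumes of identical shape, in HU.
    mask    : optional boolean body mask; restricting the MAE to it typically
              yields lower values than evaluating over the whole image grid.
    """
    diff = np.abs(sct.astype(np.float64) - ct.astype(np.float64))
    if mask is not None:
        diff = diff[mask]
    return float(diff.mean())

# Illustrative call on random volumes standing in for real data.
rng = np.random.default_rng(0)
ct = rng.normal(0.0, 300.0, size=(16, 64, 64))
sct = ct + rng.normal(0.0, 40.0, size=ct.shape)
body_mask = np.ones_like(ct, dtype=bool)
print(f"MAE inside body mask: {mae_hu(sct, ct, body_mask):.1f} HU")
```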
“…The abdomen was analysed only marginally [20], [23], [24], [25], [26] and the vast majority (94.5%) of results were reported from MR images acquired with 1.5 T or higher fields. To date, a few additional investigations were conducted with fields below 1 T [27], [28], [29]. The sCT generation task in the abdomen at low field was investigated only in four studies with U-Net [28], Pix2pix [20] and a combination of multiple networks including CycleGAN [24], [30].…”
Section: Introduction (mentioning)
confidence: 99%
“…Here, we developed 2-D and 3-D CNNs for this purpose and used the retrospective image registration evaluation (RIRE) dataset (this dataset can be downloaded from: http://www.insight-journal.org/rire/download_data.php, accessed on 1 October 2021) containing the 3-D MRI and CT pairs of 16 patients to train them [29]. We used a modified U-Net [30,31] architecture, as it has been shown to perform well in similar tasks [27,32–35]. Prior to training, we co-registered each image pair using an existing multi-modal image registration method [36].…”
Section: Synthetic CT Generation (mentioning)
confidence: 99%
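The modified U-Net mentioned in this excerpt is not specified here; the sketch below is a deliberately small 2-D encoder–decoder with skip connections trained with an L1 loss on paired MR/CT slices, written in PyTorch purely to illustrate the general approach. All layer widths, the TinyUNet name and the random tensors are assumptions, not the cited architecture.

```python
import torch
import torch.nn as nn

def conv_block(in_ch, out_ch):
    # Two 3x3 convolutions with ReLU, the basic U-Net building block.
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
    )

class TinyUNet(nn.Module):
    """Illustrative 2-D U-Net regressor: one MR slice in, one continuous HU map out."""
    def __init__(self, base=32):
        super().__init__()
        self.enc1 = conv_block(1, base)
        self.enc2 = conv_block(base, base * 2)
        self.pool = nn.MaxPool2d(2)
        self.bottleneck = conv_block(base * 2, base * 4)
        self.up2 = nn.ConvTranspose2d(base * 4, base * 2, 2, stride=2)
        self.dec2 = conv_block(base * 4, base * 2)
        self.up1 = nn.ConvTranspose2d(base * 2, base, 2, stride=2)
        self.dec1 = conv_block(base * 2, base)
        self.head = nn.Conv2d(base, 1, 1)  # linear output for continuous HU values

    def forward(self, x):
        e1 = self.enc1(x)
        e2 = self.enc2(self.pool(e1))
        b = self.bottleneck(self.pool(e2))
        d2 = self.dec2(torch.cat([self.up2(b), e2], dim=1))
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))
        return self.head(d1)

# One illustrative training step on random tensors standing in for co-registered MR/CT slices.
model = TinyUNet()
opt = torch.optim.Adam(model.parameters(), lr=1e-4)
mr, ct = torch.randn(4, 1, 128, 128), torch.randn(4, 1, 128, 128)
loss = nn.functional.l1_loss(model(mr), ct)
loss.backward()
opt.step()
```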
“…In addition to fully convolutional networks, other network architectures have been applied to learn the mapping from MR to CT images and generate pseudo CT with continuous values, such as generative adversarial networks [5–9], U-Net [10], residual U-Net [11], and HighRes3DNet [12].…”
Section: Related Work (mentioning)
confidence: 99%
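For the adversarial variants listed in this excerpt, a common paired setup (pix2pix-style) trains a generator with an L1 term plus an adversarial term from a patch discriminator that scores (MR, CT) pairs. The sketch below is one illustrative PyTorch training step under those assumptions; the tiny generator, the PatchDiscriminator, the loss weight and the random tensors are placeholders, not any cited model.

```python
import torch
import torch.nn as nn

class PatchDiscriminator(nn.Module):
    """Small conditional discriminator: scores (MR, CT) pairs patch-wise, pix2pix-style."""
    def __init__(self, ch=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(2, ch, 4, stride=2, padding=1), nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(ch, ch * 2, 4, stride=2, padding=1), nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(ch * 2, 1, 4, padding=1),  # one logit per image patch
        )

    def forward(self, mr, ct):
        return self.net(torch.cat([mr, ct], dim=1))

# Placeholder generator: any image-to-image network (e.g. a U-Net) would go here.
generator = nn.Sequential(
    nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 1, 3, padding=1),
)
disc = PatchDiscriminator()
g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
d_opt = torch.optim.Adam(disc.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

mr, real_ct = torch.randn(4, 1, 128, 128), torch.randn(4, 1, 128, 128)

# Discriminator step: real pairs labelled 1, generated pairs labelled 0.
fake_ct = generator(mr).detach()
real_logits, fake_logits = disc(mr, real_ct), disc(mr, fake_ct)
d_loss = bce(real_logits, torch.ones_like(real_logits)) + bce(fake_logits, torch.zeros_like(fake_logits))
d_opt.zero_grad(); d_loss.backward(); d_opt.step()

# Generator step: fool the discriminator while an L1 term keeps HU values close to the target CT.
fake_ct = generator(mr)
adv_logits = disc(mr, fake_ct)
g_loss = bce(adv_logits, torch.ones_like(adv_logits)) + 100.0 * nn.functional.l1_loss(fake_ct, real_ct)
g_opt.zero_grad(); g_loss.backward(); g_opt.step()
```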
“…However, the CT scan exposes the patient to a radiation dose and generates images with low soft-tissue contrast [4]. Recently, various learning-based methods using deep learning have been proposed to learn the complex mapping from the tissue details of MR images to CT images in the same patients [5–12]. Another way to generate pseudo CT images is to segment MR images into different tissue classes.…”
Section: Introduction (mentioning)
confidence: 99%
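The segmentation-based alternative mentioned at the end of this excerpt typically assigns a bulk HU value to each tissue class of the segmented MR image. A minimal sketch, with the class codes and HU values chosen purely for illustration:

```python
import numpy as np

# Illustrative bulk-density assignment: an integer tissue-label map (from any MR
# segmentation method) is mapped to a representative HU value per class.
# These class codes and HU values are assumptions for illustration only.
BULK_HU = {
    0: -1000.0,  # air
    1: -100.0,   # fat
    2: 40.0,     # soft tissue
    3: 700.0,    # bone
}

def labels_to_pseudo_ct(labels):
    """Convert an integer tissue-label volume into a piecewise-constant pseudo CT (HU)."""
    pseudo_ct = np.zeros(labels.shape, dtype=np.float32)
    for cls, hu in BULK_HU.items():
        pseudo_ct[labels == cls] = hu
    return pseudo_ct

# Example with a random label map standing in for an MR segmentation.
labels = np.random.randint(0, 4, size=(8, 64, 64))
print(labels_to_pseudo_ct(labels).mean())
```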