Simultaneous reconstruction of activity and attenuation using the maximum-likelihood reconstruction of activity and attenuation (MLAA) algorithm augmented by time-of-flight information is a promising method for PET attenuation correction. However, it still suffers from several problems, including crosstalk artifacts, slow convergence, and noisy attenuation maps (μ-maps). In this work, we developed deep convolutional neural networks (CNNs) to overcome these MLAA limitations, and we verified their feasibility using a clinical brain PET dataset. We applied the proposed method to one of the most challenging PET cases for simultaneous image reconstruction: 18F-fluorinated N-3-fluoropropyl-2-β-carboxymethoxy-3-β-(4-iodophenyl)nortropane (18F-FP-CIT) PET scans, which show highly specific binding to the striatum of the brain. Three different CNN architectures (convolutional autoencoder [CAE], U-net, and a hybrid of CAE and U-net) were designed and trained to learn a CT-derived μ-map (μ-CT) from the MLAA-generated activity distribution and μ-map (μ-MLAA). The PET/CT data of 40 patients with suspected Parkinson disease were used for 5-fold cross-validation. For training the CNNs, 800,000 transverse PET and CT slices augmented from 32 patient datasets were used. The similarity to μ-CT of the CNN-generated μ-maps (μ-CAE, μ-U-net, and μ-Hybrid) and of μ-MLAA was compared using the Dice similarity coefficient. In addition, we compared the activity concentrations in specific (striatum) and nonspecific (cerebellum and occipital cortex) binding regions, as well as the striatal binding ratios, in the PET activity images reconstructed using those μ-maps. The CNNs generated less noisy and more uniform μ-maps than the original μ-MLAA. Moreover, air cavities and bones were better resolved in the proposed CNN outputs. In addition, the proposed deep learning approach was useful for mitigating the crosstalk problem in the MLAA reconstruction.
The hybrid network of CAE and U-net yielded the μ-maps most similar to μ-CT (Dice similarity coefficient over the whole head: 0.79 for bone and 0.72 for air cavities), resulting in only about a 5% error in activity and binding ratio quantification. The proposed deep learning approach is promising for accurate attenuation correction of activity distributions in time-of-flight PET systems.
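The Dice similarity coefficient used above compares binary masks (here, bone or air-cavity regions derived from the μ-maps) as twice the intersection divided by the sum of the mask sizes. A minimal NumPy sketch (the bone threshold of 0.11 cm⁻¹ and the toy μ-values are illustrative, not taken from the paper):

```python
import numpy as np

def dice_coefficient(seg_a, seg_b):
    """Dice similarity coefficient between two binary masks:
    2 * |A ∩ B| / (|A| + |B|)."""
    a = np.asarray(seg_a, dtype=bool)
    b = np.asarray(seg_b, dtype=bool)
    denom = a.sum() + b.sum()
    if denom == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * np.logical_and(a, b).sum() / denom

# Example: compare bone masks thresholded from two toy mu-maps.
mu_cnn = np.array([[0.05, 0.12], [0.13, 0.09]])
mu_ct  = np.array([[0.04, 0.12], [0.14, 0.02]])
bone_cnn = mu_cnn > 0.11
bone_ct  = mu_ct > 0.11
print(dice_coefficient(bone_cnn, bone_ct))  # → 1.0
```

A Dice value of 1 indicates identical masks and 0 indicates no overlap, which is why the 0.79 (bone) and 0.72 (air) values above indicate substantially better agreement than noisy MLAA segmentations.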
We propose a new deep learning-based approach to provide more accurate whole-body PET/MRI attenuation correction than is possible with the Dixon-based 4-segment method. We use the activity and attenuation maps estimated by the maximum-likelihood reconstruction of activity and attenuation (MLAA) algorithm as inputs to a convolutional neural network (CNN) that learns a CT-derived attenuation map. Methods: The whole-body 18F-FDG PET/CT scan data of 100 cancer patients (38 men and 62 women; age, 57.3 ± 14.1 y) were retrospectively used for training and testing the CNN. A modified U-net was trained to predict a CT-derived μ-map (μ-CT) from the MLAA-generated activity distribution (λ-MLAA) and μ-map (μ-MLAA). We used 1.3 million patches derived from the data of 60 patients for training the CNN; the data of 20 others served as a validation set to prevent overfitting, and the data of the remaining 20 were used as a test set for the CNN performance analysis. The attenuation maps generated using the proposed method (μ-CNN), μ-MLAA, and the 4-segment method (μ-segment) were compared with μ-CT, the ground truth. We also compared the voxelwise correlation between the activity images reconstructed using ordered-subset expectation maximization with the respective μ-maps, as well as the SUVs of primary and metastatic bone lesions obtained by drawing regions of interest on the activity images. Results: The CNN generated less noisy attenuation maps and achieved better bone identification than MLAA. The average Dice similarity coefficient for bone regions between μ-CNN and μ-CT was 0.77, significantly higher than that between μ-MLAA and μ-CT (0.36). The CNN result also showed the best pixel-by-pixel correlation with the CT-based results and markedly reduced the differences in activity maps relative to CT-based attenuation correction.
Conclusion: The proposed deep neural network produced a more reliable attenuation map for 511-keV photons than the 4-segment method currently used in whole-body PET/MRI studies. This article is available at: http://jnm.snmjournals.org/content/60/8/1183
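The patch-based training described above pairs co-registered inputs (λ-MLAA, μ-MLAA) with labels (μ-CT) cut from aligned volumes. A minimal sketch of such patch extraction, assuming aligned volumes and an illustrative 32-voxel patch size with 16-voxel stride (neither is specified by the abstract):

```python
import numpy as np

def extract_patches_2d(volume_a, volume_b, patch_size=32, stride=16):
    """Extract co-registered 2D patches from two aligned 3D volumes
    (e.g., an MLAA mu-map as input and a CT-derived mu-map as label).
    Volumes are indexed as (slice, row, column)."""
    patches_a, patches_b = [], []
    for z in range(volume_a.shape[0]):  # iterate transverse slices
        for y in range(0, volume_a.shape[1] - patch_size + 1, stride):
            for x in range(0, volume_a.shape[2] - patch_size + 1, stride):
                patches_a.append(volume_a[z, y:y + patch_size, x:x + patch_size])
                patches_b.append(volume_b[z, y:y + patch_size, x:x + patch_size])
    return np.stack(patches_a), np.stack(patches_b)

rng = np.random.default_rng(0)
mu_mlaa = rng.random((4, 64, 64))  # toy "MLAA" volume
mu_ct   = rng.random((4, 64, 64))  # toy "CT" volume
X, Y = extract_patches_2d(mu_mlaa, mu_ct)
print(X.shape)  # → (36, 32, 32): 4 slices x 3 x 3 positions
```

Overlapping strides multiply the number of training pairs per patient, which is how tens of volumes can yield over a million patches.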
The objective of this study is to develop a convolutional neural network (CNN) for computed tomography (CT) image super-resolution. The network learns an end-to-end mapping between low-resolution (thick-slice) and high-resolution (thin-slice) images using a modified U-net. To verify the proposed method, we train and test the CNN using axially averaged data from existing thin-slice CT images as input and their middle slice as the label. Fifty-two CT studies are used as the CNN training set, and 13 CT studies are used as the test set. We perform five-fold cross-validation to confirm the performance consistency. Because all input and output images are used in two-dimensional slice format, the total number of slices for training the CNN is 7670. We assess the performance of the proposed method with respect to resolution and contrast, as well as the noise properties. The CNN generates output images that are virtually equivalent to the ground truth. The most remarkable image-recovery improvement by the CNN is the deblurring of boundaries of bone structures and air cavities. The CNN output yields an approximately 10% higher peak signal-to-noise ratio and a lower normalized root-mean-square error than the input (thicker slices). The CNN output noise level is lower than the ground truth and equivalent to the iterative image reconstruction result. The proposed deep learning method is useful for both super-resolution and de-noising.
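The training-pair construction and the two quality metrics above can be sketched briefly: a thick slice is simulated by averaging adjacent thin slices, and the retained middle slice serves as the label; PSNR and NRMSE then quantify how far the input falls from that label. A minimal NumPy sketch with toy data (the three-slice averaging window is an assumption for illustration):

```python
import numpy as np

def psnr(reference, test, data_range=None):
    """Peak signal-to-noise ratio in dB."""
    reference = np.asarray(reference, dtype=np.float64)
    test = np.asarray(test, dtype=np.float64)
    if data_range is None:
        data_range = reference.max() - reference.min()
    mse = np.mean((reference - test) ** 2)
    return 10.0 * np.log10(data_range ** 2 / mse)

def nrmse(reference, test):
    """Normalized root-mean-square error."""
    reference = np.asarray(reference, dtype=np.float64)
    test = np.asarray(test, dtype=np.float64)
    return np.sqrt(np.mean((reference - test) ** 2)) / np.sqrt(np.mean(reference ** 2))

# Simulate a thick slice by averaging three adjacent thin slices;
# the middle thin slice is the training label.
rng = np.random.default_rng(1)
thin = rng.random((3, 16, 16))
thick_input = thin.mean(axis=0)
label = thin[1]
print(psnr(label, thick_input), nrmse(label, thick_input))
```

A super-resolution network is then trained to map `thick_input` back toward `label`; an output with ~10% higher PSNR than the input, as reported above, means it recovers a meaningful fraction of the lost axial detail.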
Personalized dosimetry with high accuracy is crucial owing to the growing interest in personalized medicine. Direct Monte Carlo simulation is considered the state-of-the-art voxel-based dosimetry technique; however, it incurs an excessive computational cost and time. To overcome the limitations of the direct Monte Carlo approach, we propose using a deep convolutional neural network (CNN) for voxel dose prediction. PET and CT image patches were used as inputs to the CNN, with the ground truth given by direct Monte Carlo simulation. The predicted voxel dose rate maps from the CNN were compared with the ground truth and with dose rate maps generated by the voxel S-value (VSV) kernel convolution method, one of the common voxel-based dosimetry techniques. The CNN-based dose rate maps agreed well with the ground truth, with voxel dose rate errors of 2.54% ± 2.09%; the VSV kernel approach showed voxel errors of 9.97% ± 1.79%. In the whole-body dosimetry study, the average organ absorbed dose errors were 1.07%, 9.43%, and 34.22% for the CNN, VSV, and OLINDA/EXM dosimetry software, respectively. The proposed CNN-based dosimetry method improved on the conventional dosimetry approaches and produced results comparable to those of direct Monte Carlo simulation at a significantly lower calculation time.
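The VSV baseline mentioned above computes the dose rate map as a 3D convolution of the activity map with a kernel of voxel S-values, i.e., the dose delivered per decay from a source voxel to each neighboring voxel. A minimal SciPy sketch (the kernel values below are illustrative placeholders, not a real S-value table for any radionuclide):

```python
import numpy as np
from scipy.ndimage import convolve

def vsv_dose_rate(activity_map, s_kernel):
    """Voxel S-value (VSV) dosimetry: dose rate map as the 3D
    convolution of the activity map with an S-value kernel."""
    return convolve(activity_map, s_kernel, mode="constant", cval=0.0)

# Toy isotropic 3x3x3 kernel: the center (self-dose) term dominates,
# with a small cross-dose contribution to each neighbor.
s_kernel = np.full((3, 3, 3), 0.05)
s_kernel[1, 1, 1] = 1.0

activity = np.zeros((5, 5, 5))
activity[2, 2, 2] = 100.0  # a single hot voxel
dose_rate = vsv_dose_rate(activity, s_kernel)
print(dose_rate[2, 2, 2])  # → 100.0 (self-dose)
print(dose_rate[2, 2, 3])  # → 5.0 (cross-dose to a neighbor)
```

The convolution assumes a homogeneous medium, which is why VSV degrades near tissue boundaries; the CNN above improves on it by taking the CT image, and thus the heterogeneous density, into account.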
No abstract