Simultaneous reconstruction of activity and attenuation using the maximum-likelihood reconstruction of activity and attenuation (MLAA) algorithm augmented by time-of-flight information is a promising method for PET attenuation correction. However, it still suffers from several problems, including crosstalk artifacts, slow convergence, and noisy attenuation maps (μ-maps). In this work, we developed deep convolutional neural networks (CNNs) to overcome these MLAA limitations, and we verified their feasibility using a clinical brain PET dataset. We applied the proposed method to one of the most challenging PET cases for simultaneous image reconstruction: 18F-labeled N-(3-fluoropropyl)-2β-carbomethoxy-3β-(4-iodophenyl)nortropane (18F-FP-CIT) PET scans, which show highly specific binding in the striatum. Three CNN architectures (convolutional autoencoder [CAE], U-net, and a hybrid of CAE and U-net) were designed and trained to learn a CT-derived μ-map (μ-CT) from the MLAA-generated activity distribution and μ-map (μ-MLAA). The PET/CT data of 40 patients with suspected Parkinson disease were used for 5-fold cross-validation. For CNN training, 800,000 transverse PET and CT slices augmented from 32 patient datasets were used. The similarity to μ-CT of the CNN-generated μ-maps (μ-CAE, μ-U-net, and μ-Hybrid) and of μ-MLAA was compared using the Dice similarity coefficient. In addition, we compared the activity concentrations in specific (striatum) and nonspecific (cerebellum and occipital cortex) binding regions, as well as the striatal binding ratios, in the PET activity images reconstructed using those μ-maps. The CNNs generated less noisy and more uniform μ-maps than the original μ-MLAA. Moreover, air cavities and bones were better resolved in the CNN outputs. In addition, the proposed deep learning approach was useful for mitigating the crosstalk problem in the MLAA reconstruction.
The hybrid network of CAE and U-net yielded the μ-maps most similar to μ-CT (Dice similarity coefficients over the whole head: 0.79 for bone and 0.72 for air cavities), resulting in only about a 5% error in activity and binding ratio quantification. The proposed deep learning approach is promising for accurate attenuation correction of the activity distribution in time-of-flight PET systems.
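The μ-map comparisons above rely on the Dice similarity coefficient, DSC = 2|A ∩ B| / (|A| + |B|), computed on binary bone and air masks. A minimal sketch of that metric is shown below; the thresholding helper and its attenuation-coefficient cutoffs are hypothetical illustrations, not the thresholds used in the study.

```python
import numpy as np

def dice_coefficient(mask_a: np.ndarray, mask_b: np.ndarray) -> float:
    """Dice similarity coefficient between two binary masks:
    DSC = 2|A ∩ B| / (|A| + |B|)."""
    a = mask_a.astype(bool)
    b = mask_b.astype(bool)
    denom = a.sum() + b.sum()
    if denom == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * np.logical_and(a, b).sum() / denom

def segment_bone_air(mu_map: np.ndarray,
                     bone_thr: float = 0.12, air_thr: float = 0.02):
    """Threshold a μ-map (cm^-1) into bone and air masks.
    The cutoff values here are assumptions chosen only for this sketch."""
    return mu_map > bone_thr, mu_map < air_thr
```

Identical masks give a DSC of 1, disjoint masks give 0, so the reported 0.79 (bone) and 0.72 (air) indicate substantial but imperfect overlap with μ-CT.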
We propose a new deep learning-based approach to provide more accurate whole-body PET/MRI attenuation correction than is possible with the Dixon-based 4-segment method. We use the activity and attenuation maps estimated by the maximum-likelihood reconstruction of activity and attenuation (MLAA) algorithm as inputs to a convolutional neural network (CNN) trained to learn a CT-derived attenuation map. Methods: The whole-body 18F-FDG PET/CT scan data of 100 cancer patients (38 men and 62 women; age, 57.3 ± 14.1 y) were retrospectively used for training and testing the CNN. A modified U-net was trained to predict a CT-derived μ-map (μ-CT) from the MLAA-generated activity distribution (λ-MLAA) and μ-map (μ-MLAA). We used 1.3 million patches derived from the data of 60 patients for training the CNN, the data of 20 others as a validation set to prevent overfitting, and the data of the remaining 20 as a test set for the CNN performance analysis. The attenuation maps generated using the proposed method (μ-CNN), μ-MLAA, and the 4-segment method (μ-segment) were compared with μ-CT, the ground truth. We also compared the voxelwise correlation between the activity images reconstructed using ordered-subset expectation maximization with the respective μ-maps, and the SUVs of primary and metastatic bone lesions obtained by drawing regions of interest on the activity images. Results: The CNN generated less noisy attenuation maps and achieved better bone identification than MLAA. The average Dice similarity coefficient for bone regions between μ-CNN and μ-CT was 0.77, significantly higher than that between μ-MLAA and μ-CT (0.36). Moreover, the CNN result showed the best pixel-by-pixel correlation with the CT-based result and markedly reduced the differences in the activity images relative to CT-based attenuation correction.
Conclusion: The proposed deep neural network produced a more reliable attenuation map for 511-keV photons than the 4-segment method currently used in whole-body PET/MRI studies.
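The training setup described above pairs a two-channel input (the MLAA activity image λ-MLAA and attenuation map μ-MLAA) with a μ-CT target, sampled as patches. A minimal sketch of that patch pairing is given below; the patch size, stride, and array shapes are illustrative assumptions, not the published training configuration.

```python
import numpy as np

def extract_patch_pairs(activity, mu_mlaa, mu_ct, patch=32, stride=32):
    """Build (input, target) training pairs from co-registered 2-D slices.
    Each input is a 2-channel patch stacking λ-MLAA and μ-MLAA; each
    target is the corresponding 1-channel μ-CT patch. Patch size and
    stride are hypothetical choices for this sketch."""
    xs, ys = [], []
    h, w = mu_ct.shape
    for i in range(0, h - patch + 1, stride):
        for j in range(0, w - patch + 1, stride):
            x = np.stack([activity[i:i + patch, j:j + patch],
                          mu_mlaa[i:i + patch, j:j + patch]])  # (2, p, p)
            xs.append(x)
            ys.append(mu_ct[i:i + patch, j:j + patch][None])   # (1, p, p)
    return np.stack(xs), np.stack(ys)
```

Feeding both the activity and the noisy μ-map lets the network exploit anatomic detail in λ-MLAA that is missing from μ-MLAA, which is consistent with the improved bone identification reported above.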
The objective of this study is to develop a convolutional neural network (CNN) for computed tomography (CT) image super-resolution. The network learns an end-to-end mapping between low-resolution (thick-slice) and high-resolution (thin-slice) images using a modified U-Net. To verify the proposed method, we train and test the CNN using axially averaged data from existing thin-slice CT images as input and their middle slice as the label. Fifty-two CT studies are used as the CNN training set, and 13 CT studies are used as the test set. We perform 5-fold cross-validation to confirm the performance consistency. Because all input and output images are handled as two-dimensional slices, the total number of slices for training the CNN is 7,670. We assess the performance of the proposed method with respect to resolution and contrast, as well as noise properties. The CNN generates output images that are virtually equivalent to the ground truth. The most remarkable image-recovery improvement by the CNN is the deblurring of the boundaries of bone structures and air cavities. The CNN output yields an approximately 10% higher peak signal-to-noise ratio and a lower normalized root mean square error than the input (thicker slices). The CNN output noise level is lower than that of the ground truth and equivalent to the iterative image reconstruction result. The proposed deep learning method is useful for both super-resolution and denoising.
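The data preparation and evaluation described above can be sketched compactly: thick-slice inputs are simulated by averaging adjacent thin slices (with the center slice as the label), and outputs are scored by PSNR and NRMSE. The averaging-window length and metric conventions below are assumptions for this sketch, not the study's exact protocol.

```python
import numpy as np

def simulate_thick_slices(volume: np.ndarray, n: int = 5):
    """Average n adjacent thin slices along axis 0 to emulate thick-slice
    input; the center slice of each window serves as the training label.
    The window length n is an assumption for this sketch."""
    half = n // 2
    inputs = np.stack([volume[k - half:k + half + 1].mean(axis=0)
                       for k in range(half, volume.shape[0] - half)])
    labels = volume[half:volume.shape[0] - half]
    return inputs, labels

def psnr(ref: np.ndarray, img: np.ndarray) -> float:
    """Peak signal-to-noise ratio in dB, using the reference maximum
    as the peak value."""
    mse = np.mean((ref - img) ** 2)
    return 20.0 * np.log10(ref.max() / np.sqrt(mse))

def nrmse(ref: np.ndarray, img: np.ndarray) -> float:
    """Root mean square error normalized by the reference dynamic range."""
    return np.sqrt(np.mean((ref - img) ** 2)) / (ref.max() - ref.min())
```

Training on such (averaged input, center-slice label) pairs is what allows evaluation against a true thin-slice ground truth without a separate high-resolution acquisition.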
Visualization of biologic processes at molecular and cellular levels has revolutionized the understanding and treatment of human diseases. However, no single biomedical imaging modality provides complete information, resulting in the emergence of multimodal approaches. Combining state-of-the-art PET and MRI technologies without loss of system performance and overall image quality can provide opportunities for new scientific and clinical innovations. Here, we present a multiparametric PET/MR imager based on a small-animal-dedicated, high-performance silicon photomultiplier (SiPM) PET system and a 7-T MR scanner. Methods: A SiPM-based PET insert with a peak sensitivity of 3.4% and a center volumetric resolution of 1.92/0.53 mm3 (filtered backprojection/ordered-subset expectation maximization) was developed. The SiPM PET insert was placed between the mouse body transceiver coil and the gradient coil of a 7-T small-animal MRI scanner for simultaneous PET/MRI. Mutual interference between the MRI and SiPM PET systems was evaluated using various MR pulse sequences. A cylindric corn oil phantom was scanned to assess the effects of the SiPM PET on MR image acquisition. To assess the influence of MRI on PET imaging functions, several PET performance indicators, including scintillation pulse shape, flood image quality, energy spectrum, counting rate, and phantom image quality, were evaluated with and without the application of MR pulse sequences. Simultaneous mouse PET/MRI studies were also performed to demonstrate the potential and usefulness of multiparametric PET/MRI in preclinical applications. Results: Excellent performance and stability of the PET system were demonstrated, and the PET/MRI combination did not result in significant image quality degradation of either modality.
Finally, simultaneous PET/MRI studies in mice demonstrated the feasibility of the developed system for evaluating biochemical and cellular changes in a brain tumor model and for facilitating the development of new multimodal imaging probes. Conclusion: We developed a multiparametric imager with high physical performance and good system stability and demonstrated its feasibility for small-animal experiments, suggesting its usefulness for investigating in vivo molecular interactions of metabolites and for cross-validation studies of both PET and MRI. Key Words: PET/MRI; silicon photomultiplier (SiPM); hybrid imaging; multiparametric imaging; dual-modality imaging probe
A substantial role of small-animal imaging has been pinpointed in numerous studies in terms of understanding the underlying mechanisms of human diseases and elucidating the efficacy of new therapeutic approaches. Among the in vivo small-animal imaging modalities, which are scaled down to dedicated devices from clinical ones, PET is the most sensitive technique that is readily translatable to the clinic (1). Spatial and temporal distributions of compounds labeled with a positron-emitting radionuclide are noninvasively measured by the PET scanner. Consequently, the PET scanner p...
The developed MR-compatible PET insert is designed for insertion into a narrow-bore magnetic resonance imaging scanner, and it provides excellent imaging performance for PET/MR preclinical studies.