Purpose Attenuation correction and scatter compensation (AC/SC) are two main steps toward quantitative PET imaging, which remain challenging in PET-only and PET/MRI systems. These can be effectively tackled via deep learning (DL) methods. However, trustworthy and generalizable DL models commonly require well-curated, heterogeneous, and large datasets from multiple clinical centers. At the same time, owing to legal/ethical issues and privacy concerns, forming a large, collective, centralized dataset poses significant challenges. In this work, we aimed to develop a DL-based model for AC/SC of PET images in a multicenter setting using federated learning (FL), without direct sharing of data. Methods Non-attenuation/scatter-corrected and CT-based attenuation/scatter-corrected (CT-ASC) 18F-FDG PET images of 300 patients were enrolled in this study. The dataset was drawn from 6 different centers, each contributing 50 patients, with scanner, image acquisition, and reconstruction protocols varying across centers. CT-based ASC PET images served as the reference standard. All images were reviewed to include only high-quality, artifact-free PET images. Both corrected and uncorrected PET images were converted to standardized uptake values (SUVs). We used a modified nested U-Net utilizing residual U-blocks in a U-shaped architecture. We evaluated two FL models, namely sequential (FL-SQ) and parallel (FL-PL), and compared their performance with the baseline centralized (CZ) learning model, wherein the data were pooled on one server, as well as with center-based (CB) models, in which a separate model was built and evaluated for each center. Data from each center were divided into training (30 patients), validation (10 patients), and test (10 patients) sets. Final evaluations and reports were performed on 60 patients (10 patients from each center). Results In terms of percent SUV absolute relative error (ARE%), both the FL-SQ (CI: 12.21–14.81%) and FL-PL (CI: 11.82–13.84%) models demonstrated excellent agreement with the centralized framework (CI: 10.32–12.00%), while the FL-based algorithms improved model performance by over 11% compared to the CB training strategy (CI: 22.34–26.10%). Furthermore, the Mann–Whitney test between the different strategies revealed no significant differences between the CZ and FL-based algorithms (p-value > 0.05) in center-categorized mode, whereas a significant difference was observed between the different training approaches on the overall dataset (p-value < 0.05). In addition, voxel-wise comparison with respect to the reference CT-ASC exhibited similar performance for images predicted by CZ (R2 = 0.94), FL-SQ (R2 = 0.93), and FL-PL (R2 = 0.92), while the CB model achieved a far lower coefficient of determination (R2 = 0.74). Despite the strong correlations between the CZ and FL-based methods with respect to the reference CT-ASC, a slight underestimation of predicted voxel values was observed. Conclusion Deep learning-based models provide promising results toward quantitative PET image reconstruction. Specifically, we developed two FL models and compared their performance with center-based and centralized models. The proposed FL-based models achieved higher performance than the center-based models and performance comparable to the centralized model. Our work provides strong empirical evidence that the FL framework retains the generalizability and robustness of DL models used for AC/SC in PET, while obviating the need for direct sharing of datasets between clinical imaging centers.
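The abstract does not give implementation details of the parallel (FL-PL) strategy. The sketch below illustrates one FedAvg-style federated round in PyTorch, assuming each center exposes a DataLoader of (uncorrected, CT-ASC) SUV volume pairs; all names (local_update, federated_round, center_loaders), the loss, and the hyperparameters are illustrative assumptions, not the authors' code.

```python
# Minimal sketch of a parallel (FedAvg-style) federated round; assumptions only,
# not the published implementation.
import copy
import torch

def local_update(model, loader, epochs=1, lr=1e-4):
    """Train a copy of the global model on one center's data and return its weights."""
    local = copy.deepcopy(model)
    opt = torch.optim.Adam(local.parameters(), lr=lr)
    loss_fn = torch.nn.L1Loss()  # voxel-wise loss between predicted and CT-ASC SUV maps
    local.train()
    for _ in range(epochs):
        for nac_suv, ctasc_suv in loader:  # (uncorrected, reference) SUV pairs
            opt.zero_grad()
            loss = loss_fn(local(nac_suv), ctasc_suv)
            loss.backward()
            opt.step()
    return local.state_dict()

def federated_round(global_model, center_loaders):
    """One FL-PL round: train per center in parallel, then average the weights."""
    states = [local_update(global_model, loader) for loader in center_loaders]
    avg = copy.deepcopy(states[0])
    for key in avg:
        stacked = torch.stack([s[key].float() for s in states])
        avg[key] = stacked.mean(dim=0).to(avg[key].dtype)
    global_model.load_state_dict(avg)
    return global_model
```

In the sequential (FL-SQ) variant, the current weights would instead be passed from one center to the next and updated in turn, with no averaging step.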
Objectives The current study aimed to design an ultra-low-dose CT examination protocol using a deep learning approach suitable for the clinical diagnosis of COVID-19 patients. Methods In this study, 800, 170, and 171 pairs of ultra-low-dose and full-dose CT images were used as input/output pairs for the training, test, and external validation sets, respectively, to implement the full-dose prediction technique. A residual convolutional neural network was applied to generate full-dose from ultra-low-dose CT images. The quality of the predicted CT images was assessed using the root mean square error (RMSE), structural similarity index (SSIM), and peak signal-to-noise ratio (PSNR). Scores ranging from 1 to 5 were assigned to reflect the subjective assessment of image quality and related COVID-19 features, including ground glass opacities (GGO), crazy paving (CP), consolidation (CS), nodular infiltrates (NI), bronchovascular thickening (BVT), and pleural effusion (PE). Results The radiation dose in terms of the CT dose index (CTDIvol) was reduced by up to 89%. The RMSE decreased from 0.16 ± 0.05 to 0.09 ± 0.02 and from 0.16 ± 0.06 to 0.08 ± 0.02 for the predicted compared with the ultra-low-dose CT images in the test and external validation sets, respectively. The overall scoring assigned by radiologists was 4.72 ± 0.57 out of 5 for reference full-dose CT images, whereas ultra-low-dose CT images were rated 2.78 ± 0.9. The predicted CT images generated by the deep learning algorithm achieved a score of 4.42 ± 0.8. Conclusions The results demonstrated that the deep learning algorithm is capable of predicting standard full-dose CT images with acceptable quality for the clinical diagnosis of COVID-19-positive patients with substantial radiation dose reduction.
Key Points:
• Ultra-low-dose CT imaging of COVID-19 patients would result in the loss of critical information about lesion types, which could potentially affect clinical diagnosis.
• Deep learning–based prediction of full-dose from ultra-low-dose CT images for the diagnosis of COVID-19 could reduce the radiation dose by up to 89%.
• Deep learning algorithms failed to recover the correct lesion structure/density for a number of patients considered outliers; as such, further research and development is warranted to address these limitations.
Electronic supplementary material: The online version of this article (10.1007/s00330-020-07225-6) contains supplementary material, which is available to authorized users.
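As a companion to the image-quality metrics cited above, the sketch below shows how RMSE, SSIM, and PSNR could be computed for a predicted slice against its full-dose reference using scikit-image; it assumes both images are float arrays normalized to [0, 1] and is not taken from the study's code.

```python
# Minimal sketch of the reported image-quality metrics (RMSE, SSIM, PSNR),
# assuming predicted and reference full-dose CT slices as float arrays in [0, 1].
import numpy as np
from skimage.metrics import structural_similarity, peak_signal_noise_ratio

def evaluate_prediction(pred, ref):
    rmse = float(np.sqrt(np.mean((pred - ref) ** 2)))
    ssim = structural_similarity(pred, ref, data_range=1.0)
    psnr = peak_signal_noise_ratio(ref, pred, data_range=1.0)
    return {"RMSE": rmse, "SSIM": ssim, "PSNR": psnr}
```

Volume- or cohort-level figures such as those reported above would then be obtained by averaging these per-slice (or per-volume) values over the test and external validation sets.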
Purpose There is a tendency to moderate the injected activity and/or reduce the acquisition time in PET examinations to minimize potential radiation hazards and increase patient comfort. This work aims to assess the performance of regular full-dose (FD) synthesis from fast/low-dose (LD) whole-body (WB) PET images using deep learning techniques. Methods Instead of using synthetic LD scans, two separate clinical WB 18F-Fluorodeoxyglucose (18F-FDG) PET/CT studies of 100 patients were acquired: one regular FD (~ 27 min) and one fast or LD (~ 3 min) consisting of 1/8th of the standard acquisition time. A modified cycle-consistent generative adversarial network (CycleGAN) and a residual neural network (ResNet), denoted as CGAN and RNET, respectively, were implemented to predict FD PET images. The quality of the predicted PET images was assessed by two nuclear medicine physicians. Moreover, the diagnostic quality of the predicted PET images was evaluated using a pass/fail scheme for the lesion detectability task. Quantitative analysis using established metrics, including standardized uptake value (SUV) bias, was performed for the liver, left/right lung, brain, and 400 malignant lesions from the test and evaluation datasets. Results CGAN scored 4.92 and 3.88 (out of 5; adequate to good) for the brain and neck + trunk regions, respectively. The average SUV bias calculated over normal tissues was 3.39 ± 0.71% and −3.83 ± 1.25% for CGAN and RNET, respectively. Bland–Altman analysis reported the lowest SUV bias (0.01%) and a 95% confidence interval of −0.36 to +0.47 for CGAN compared with the reference FD images for malignant lesions. Conclusion CycleGAN is able to synthesize clinical FD WB PET images from LD images acquired with 1/8th of the standard injected activity or acquisition time. The predicted FD images show comparable performance in terms of lesion detectability, qualitative scores, and quantification bias and variance.
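The SUV bias and Bland–Altman figures cited above correspond, in principle, to the kind of region-wise comparison sketched below; it assumes paired numpy arrays of mean SUVs per region or lesion from the predicted and reference FD images, and the function names are illustrative rather than taken from the paper.

```python
# Minimal sketch of region-wise SUV relative bias and Bland-Altman statistics,
# assuming paired mean-SUV arrays (predicted vs. reference FD); illustrative only.
import numpy as np

def suv_relative_bias(pred_suv, ref_suv):
    """Percent SUV bias of predicted versus reference FD values, per region/lesion."""
    return 100.0 * (pred_suv - ref_suv) / ref_suv

def bland_altman(pred_suv, ref_suv):
    """Mean difference (bias) and 95% limits of agreement between the two readings."""
    diff = pred_suv - ref_suv
    bias = diff.mean()
    half_width = 1.96 * diff.std(ddof=1)
    return bias, (bias - half_width, bias + half_width)
```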
This review sets out to discuss the foremost applications of artificial intelligence (AI), particularly deep learning (DL) algorithms, in single-photon emission computed tomography (SPECT) and positron emission tomography (PET) imaging. To this end, the underlying limitations/challenges of these imaging modalities are briefly discussed, followed by a description of AI-based solutions proposed to address these challenges. This review will focus on mainstream generic fields, including instrumentation, image acquisition/formation, image reconstruction and low-dose/fast scanning, quantitative imaging, image interpretation (computer-aided detection/diagnosis/prognosis), as well as internal radiation dosimetry. A brief description of deep learning algorithms and the fundamental architectures used for these applications is also provided. Finally, the challenges, opportunities, and barriers to full-scale validation and adoption of AI-based solutions for improvement of image quality and quantitative accuracy of PET and SPECT images in the clinic are discussed.
Purpose: To assess the performance of full-dose (FD) positron emission tomography (PET) image synthesis in both image and projection space from low-dose (LD) PET images/sinograms without sacrificing diagnostic quality using deep learning techniques. Methods: Clinical brain PET/CT studies of 140 patients were retrospectively employed for LD to FD PET conversion. 5% of the events were randomly selected from the FD list-mode PET data to simulate a realistic LD acquisition. A modified 3D U-Net model was implemented to predict FD sinograms in projection space (PSS) and FD images in image space (PIS) from their corresponding LD sinograms/images, respectively. The quality of the predicted PET images was assessed by two nuclear medicine specialists using a five-point grading scheme. Quantitative analysis using established metrics, including the peak signal-to-noise ratio (PSNR), structural similarity index metric (SSIM), region-wise standardized uptake value (SUV) bias, as well as first-, second-, and high-order texture radiomic features in 83 brain regions, was also performed for the test and evaluation datasets. Results: All PSS images were scored 4 or higher (good to excellent) by the nuclear medicine specialists. SSIM values of 0.96 ± 0.03 and 0.97 ± 0.02 and PSNR values of 31.70 ± 0.75 and 37.30 ± 0.71 were obtained for PIS and PSS, respectively. The average SUV bias calculated over all brain regions was 0.24 ± 0.96% and 1.05 ± 1.44% for PSS and PIS, respectively. The Bland–Altman plots reported the lowest SUV bias (0.02) and variance (95% CI: −0.92, +0.84) for PSS compared with the reference FD images. The relative error of the homogeneity radiomic feature, belonging to the Grey Level Co-occurrence Matrix category, was −1.07 ± 1.77 and 0.28 ± 1.4 for PIS and PSS, respectively. Conclusion: The qualitative assessment and quantitative analysis demonstrated that FD PET prediction in projection space led to superior performance, resulting in higher image quality and lower SUV bias and variance compared to FD PET prediction in the image domain.
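The abstract does not describe the modified 3D U-Net in detail; the sketch below is a generic 3D U-Net-style encoder-decoder in PyTorch that could serve as a starting point for either projection-space or image-space prediction. Channel counts, depth, and layer choices are assumptions, not the published architecture.

```python
# Minimal sketch of a 3D U-Net-style encoder-decoder; architecture details are
# assumptions, not the authors' "modified 3D U-Net". Input spatial dimensions
# must be divisible by 4 for the two pooling levels used here.
import torch
import torch.nn as nn

def conv_block(in_ch, out_ch):
    return nn.Sequential(
        nn.Conv3d(in_ch, out_ch, kernel_size=3, padding=1),
        nn.BatchNorm3d(out_ch),
        nn.ReLU(inplace=True),
        nn.Conv3d(out_ch, out_ch, kernel_size=3, padding=1),
        nn.BatchNorm3d(out_ch),
        nn.ReLU(inplace=True),
    )

class UNet3D(nn.Module):
    def __init__(self, base_ch=16):
        super().__init__()
        self.enc1 = conv_block(1, base_ch)
        self.enc2 = conv_block(base_ch, base_ch * 2)
        self.bottleneck = conv_block(base_ch * 2, base_ch * 4)
        self.pool = nn.MaxPool3d(2)
        self.up2 = nn.ConvTranspose3d(base_ch * 4, base_ch * 2, kernel_size=2, stride=2)
        self.dec2 = conv_block(base_ch * 4, base_ch * 2)
        self.up1 = nn.ConvTranspose3d(base_ch * 2, base_ch, kernel_size=2, stride=2)
        self.dec1 = conv_block(base_ch * 2, base_ch)
        self.out = nn.Conv3d(base_ch, 1, kernel_size=1)

    def forward(self, x):  # x: (batch, 1, D, H, W) LD volume or sinogram stack
        e1 = self.enc1(x)
        e2 = self.enc2(self.pool(e1))
        b = self.bottleneck(self.pool(e2))
        d2 = self.dec2(torch.cat([self.up2(b), e2], dim=1))
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))
        return self.out(d1)  # predicted FD volume (image or projection space)
```

In the projection-space pipeline (PSS), the predicted FD sinograms would still need to be reconstructed into images before the quantitative evaluation described above.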