Reducing radiation dose is important for PET imaging. However, reducing the injected dose increases image noise and lowers the signal-to-noise ratio (SNR), subsequently affecting diagnostic and quantitative accuracy. Deep learning methods have shown great potential to reduce noise and improve the SNR in low-dose PET data. In this work, we comprehensively investigated the quantitative accuracy of small lung nodules, in addition to visual image quality, using deep learning-based denoising methods for oncological PET imaging. We applied and optimized an advanced deep learning method based on the U-net architecture to predict the standard-dose PET image from 10% low-dose PET data. We also investigated the effect of different network architectures, image dimensions, labels, and inputs on both noise-reduction performance and quantitative accuracy. Normalized mean square error (NMSE), SNR, and standardized uptake value (SUV) bias of different nodule regions of interest (ROIs) were used for evaluation. Our results showed that U-net and GAN are superior to CAE, with smaller SUVmean and SUVmax bias at the expense of inferior SNR. The fully 3D U-net achieved the best quantitative performance compared with 2D and 2.5D U-nets, with less than 15% SUVmean bias for all ten patients. U-net generally outperforms the residual U-net (r-U-net), with smaller NMSE, higher SNR, and lower SUVmax bias. The fully 3D U-net is also superior to several existing denoising methods, including the Gaussian filter, the anatomically guided non-local mean (NLM) filter, and MAP reconstruction with quadratic and relative difference priors, offering better image quality and a better trade-off between noise and bias.
Furthermore, incorporating aligned CT images has the potential to further improve quantitative accuracy in a multi-channel U-net. We found that the optimal architectures and parameters of deep learning-based methods differ between absolute quantitative accuracy and visual image quality. Our quantitative results demonstrated that the fully 3D U-net can both effectively reduce image noise and control bias, even for sub-centimeter small lung nodules, when generating standard-dose PET from 10% low-count down-sampled data.
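The abstract names its evaluation metrics (NMSE, SNR, SUVmean/SUVmax bias) without spelling out the formulas. A minimal sketch using conventional definitions (the exact normalization and ROI conventions in the paper are assumptions here):

```python
import numpy as np

def nmse(pred, ref):
    """Normalized mean square error between a predicted and a reference image."""
    return np.sum((pred - ref) ** 2) / np.sum(ref ** 2)

def snr_db(roi, background):
    """SNR in dB: mean ROI uptake over the standard deviation of a uniform background."""
    return 20.0 * np.log10(np.mean(roi) / np.std(background))

def suv_bias(pred_roi, ref_roi, reducer=np.mean):
    """Relative SUV bias (%) in a nodule ROI; pass reducer=np.max for SUVmax bias."""
    return 100.0 * (reducer(pred_roi) - reducer(ref_roi)) / reducer(ref_roi)
```

A denoised image 15% hotter than the reference everywhere would give `suv_bias` of 15%, matching the way the per-patient SUVmean bias threshold is reported above.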
Purpose Attenuation correction using CT transmission scanning increases the accuracy of single-photon emission computed tomography (SPECT) and enables quantitative analysis. Existing SPECT-only systems typically do not support transmission scanning, and scans on these systems are therefore susceptible to attenuation artifacts. Moreover, CT scanning increases the radiation dose to patients, and significant artifacts can occur due to misregistration between the SPECT and CT scans as a result of patient motion. The purpose of this study is to develop an approach that estimates attenuation maps directly from SPECT emission data using deep learning methods. Methods Both photopeak-window and scatter-window SPECT images were used as inputs to better exploit the attenuation information embedded in the emission data. The CT-based attenuation maps were used as labels; cardiac SPECT/CT images of 65 patients were included for training and testing. We implemented and evaluated deep fully convolutional neural networks using both standard training and training with an adversarial strategy. Results The synthetic attenuation maps were qualitatively and quantitatively consistent with the CT-based attenuation maps. The globally normalized mean absolute error (NMAE) between the synthetic and CT-based attenuation maps was 3.60% ± 0.85% among the 25 testing subjects. The SPECT images reconstructed using the CT-based and synthetic attenuation maps were highly consistent. The NMAE between the reconstructed SPECT images that were corrected using the synthetic and CT-based attenuation maps was 0.26% ± 0.15%, whereas the localized absolute percentage error was 1.33% ± 3.80% in the left ventricle (LV) myocardium and 1.07% ± 2.58% in the LV blood pool. Conclusion We developed a deep convolutional neural network to estimate attenuation maps for SPECT directly from the emission data.
The proposed method is capable of generating highly reliable attenuation maps to facilitate attenuation correction for SPECT-only scanners for myocardial perfusion imaging.
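The headline figure of merit above is the globally normalized mean absolute error. The abstract does not give the normalization explicitly; one plausible sketch normalizes by the dynamic range of the reference map:

```python
import numpy as np

def nmae(pred, ref):
    """Globally normalized mean absolute error: mean absolute difference scaled
    by the dynamic range of the reference attenuation map. The choice of
    normalizer (range vs. mean of the reference) is an assumption here."""
    return np.mean(np.abs(pred - ref)) / (ref.max() - ref.min())
```

With this definition, a synthetic map that deviates from the CT-based map by 3.6% of the map's dynamic range on average would reproduce the reported testing-set error.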
Respiratory motion degrades the detection and quantification capabilities of PET/CT imaging. Moreover, mismatch between a fast helical CT image and a time-averaged PET image due to respiratory motion results in additional attenuation-correction artifacts and inaccurate localization. Current motion-compensation approaches typically have 3 limitations: the mismatch between respiration-gated PET images and the CT attenuation correction (CTAC) map can introduce artifacts into the gated PET reconstructions that subsequently affect the accuracy of the motion estimation; sinogram-based correction approaches do not correct for intragate motion due to intracycle and intercycle breathing variations; and the mismatch between the PET motion-compensation reference gate and the CT image can cause an additional CT-mismatch artifact. In this study, we established a motion-correction framework to address these limitations. In the proposed framework, the combined emission-transmission reconstruction algorithm was used for phase-matched gated PET reconstructions to facilitate the motion-model building. An event-by-event nonrigid respiratory motion-compensation method with correlations between internal organ motion and external respiratory signals was used to correct both intracycle and intercycle breathing variations. The PET reference gate was automatically determined by a newly proposed CT-matching algorithm. We applied the new framework to 13 human datasets with 3 different radiotracers and 323 lesions and compared its performance with CTAC and non-attenuation-correction (NAC) approaches. Validation using 4-dimensional CT was performed for one lung cancer dataset. For the 10 18F-FDG studies, the proposed method outperformed (P < 0.006) both the CTAC and the NAC methods in terms of region-of-interest-based SUVmean, SUVmax, and SUV ratio improvements over no motion correction (SUVmean: 19.9% vs. 14.0% vs. 13.2%; SUVmax: 15.5% vs. 10.8% vs. 10.6%; SUV ratio: 24.1% vs. 17.6% vs.
16.2%, for the proposed, CTAC, and NAC methods, respectively). The proposed method increased SUV ratios over no motion correction for 94.4% of lesions, compared with 84.8% and 86.4% using the CTAC and NAC methods, respectively. For the 2 18F-fluoropropyl-(+)-dihydrotetrabenazine (18F-FPDTBZ) studies, the proposed method reduced the CT-mismatch artifacts in the lower lung, where the CTAC approach failed, and maintained the quantification accuracy of bone marrow, where the NAC approach failed. For the 18F-FMISO study, the proposed method outperformed both the CTAC and the NAC methods in terms of motion-estimation accuracy at 2 lung lesion locations. The proposed PET/CT respiratory event-by-event motion-correction framework, with motion information derived from matched attenuation-corrected PET data, provides image quality superior to that of the CTAC and NAC methods for multiple tracers.
PET has the potential to perform absolute in vivo radiotracer quantitation. This potential can be compromised by voluntary body motion (BM), which degrades image resolution, alters apparent tracer uptake, introduces CT-based attenuation-correction mismatch artifacts, and causes inaccurate parameter estimates in dynamic studies. Existing body motion correction (BMC) methods include frame-based image-registration (FIR) approaches and real-time motion tracking using external measurement devices. FIR does not correct for motion occurring within a pre-defined frame, and the device-based method is generally not practical in routine clinical use, since it requires attaching a tracking device to the patient and additional device setup time. In this paper, we proposed a data-driven algorithm, centroid of distribution (COD), to detect BM. In this algorithm, the central coordinate of the time-of-flight (TOF) bin, which serves as a reasonable surrogate for the annihilation point, is calculated for every event and averaged over a given time interval to generate a COD trace. We hypothesized that abrupt changes in the COD trace along the lateral direction represent BMs. After detection, BM is estimated using non-rigid image registration and corrected through list-mode reconstruction. The COD-based BMC approach was validated using a monkey study and evaluated against FIR using four human studies and one dog study with multiple tracers. The proposed approach successfully detected BMs and yielded superior correction results over conventional FIR approaches.
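The COD detection step described above (average the TOF-estimated annihilation coordinates over short intervals, then flag abrupt lateral jumps) can be sketched as follows. The interval length, jump threshold, and array layout are illustrative assumptions, not values from the paper:

```python
import numpy as np

def cod_trace(event_times, tof_positions, interval=1.0):
    """Centroid-of-distribution trace: average the TOF-estimated annihilation
    coordinates of list-mode events over fixed time intervals.

    event_times   : (N,) event arrival times in seconds
    tof_positions : (N, 3) most-likely annihilation points from the TOF bins
    """
    order = np.argsort(event_times)
    t, p = event_times[order], tof_positions[order]
    bins = ((t - t[0]) // interval).astype(int)
    # one 3-D centroid per time interval (assumes every interval has events)
    return np.array([p[bins == b].mean(axis=0) for b in range(bins.max() + 1)])

def detect_motion(trace, axis=0, threshold=2.0):
    """Flag abrupt jumps of the COD trace along one (lateral) axis as
    candidate body motions; returns the indices where a jump begins."""
    jumps = np.abs(np.diff(trace[:, axis]))
    return np.nonzero(jumps > threshold)[0] + 1
```

For example, a stream whose events shift from being centered at x = 0 to x = 5 between the first and second interval would yield a single detected motion at the second interval.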
Introduction Twin-to-twin transfusion syndrome (TTTS) is a potentially lethal condition that affects pregnancies in which twins share a single placenta. The definitive treatment for TTTS is fetoscopic laser photocoagulation, a procedure in which placental blood vessels are selectively cauterized. Challenges in this procedure include difficulty in quickly identifying placental blood vessels due to the many artifacts in the endoscopic video that the surgeon uses for navigation. We propose using deep-learned segmentations of blood vessels to create masks that can be recombined with the original fetoscopic video frame in such a way that the location of placental blood vessels is discernible at a glance. Methods In a process approved by an institutional review board, intraoperative videos were acquired from ten fetoscopic laser photocoagulation surgeries performed at Yale New Haven Hospital. A total of 345 video frames were selected from these videos at regularly spaced time intervals. The video frames were segmented once by an expert human rater (a clinician) and once by a novice but trained human rater (an undergraduate student). The segmentations were used to train a fully convolutional neural network of 25 layers. Results The neural network was able to produce segmentations with a high similarity to ground-truth segmentations produced by an expert human rater (sensitivity=92.15%±10.69%) and produced segmentations that were significantly more accurate than those produced by a novice human rater (sensitivity=56.87%±21.64%; p < 0.01). Conclusion A convolutional neural network can be trained to segment placental blood vessels with near-human accuracy and can exceed the accuracy of novice human raters. Recombining these segmentations with the original fetoscopic video frames can produce enhanced frames in which blood vessels are easily detectable. This has significant implications for aiding fetoscopic surgeons, especially trainees who are not yet at an expert level.
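The sensitivity figures quoted above compare each segmentation against the expert ground truth. Using the standard pixel-wise definition (sensitivity, i.e. recall, is the fraction of ground-truth vessel pixels recovered), a minimal sketch:

```python
import numpy as np

def sensitivity(pred_mask, truth_mask):
    """Pixel-wise sensitivity (recall): the fraction of ground-truth vessel
    pixels that the predicted segmentation recovers."""
    pred = np.asarray(pred_mask).astype(bool)
    truth = np.asarray(truth_mask).astype(bool)
    tp = np.count_nonzero(pred & truth)   # vessel pixels correctly predicted
    fn = np.count_nonzero(~pred & truth)  # vessel pixels missed
    return tp / (tp + fn)
```

Note that sensitivity alone ignores false positives; a complete evaluation would pair it with specificity or precision, but only sensitivity is reported in the abstract.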
Respiratory motion during positron emission tomography (PET)/computed tomography (CT) imaging can cause significant image blurring and underestimation of tracer concentration for both static and dynamic studies. In this paper, with the aim of eliminating both intra-cycle and inter-cycle motion and of applying to dynamic imaging, we developed a non-rigid event-by-event (NR-EBE) respiratory motion-compensated list-mode reconstruction algorithm. The proposed method consists of two components: the first estimates a continuous non-rigid motion field of the internal organs using the internal-external motion correlation. This continuous motion field is then incorporated into the second component, the non-rigid MOLAR (NR-MOLAR) reconstruction algorithm, to deform the system matrix to the reference location where the attenuation CT is acquired. The point spread function (PSF) and time-of-flight (TOF) kernels in NR-MOLAR are incorporated in the system matrix calculation and are therefore also deformed according to the motion. We first validated NR-MOLAR using an XCAT phantom with simulated respiratory motion. NR-EBE motion-compensated image reconstruction using both components was then validated on three human studies injected with 18F-FPDTBZ and one with 18F-fluorodeoxyglucose (FDG). The human results were compared with conventional non-rigid motion correction using a discrete motion field (NR-discrete, one motion field per gate) and a previously proposed rigid EBE motion-compensated image reconstruction (R-EBE) that was designed to correct for rigid motion on a target lesion/organ. The XCAT results demonstrated that NR-MOLAR incorporating both PSF and TOF kernels effectively corrected for non-rigid motion. The 18F-FPDTBZ studies showed that NR-EBE outperformed NR-discrete, and yielded results comparable to R-EBE on target organs while yielding superior image quality in other regions.
The FDG study showed that NR-EBE clearly improved the visibility of multiple moving lesions in the liver, some of which could not be discerned in the other reconstructions, in addition to improving quantification. These results show that NR-EBE motion-compensated image reconstruction is a promising tool for lesion detection and quantification when imaging thoracic and abdominal regions using PET.
In PET/CT imaging, CT is used for PET attenuation correction (AC). Mismatch between CT and PET due to patient body motion results in AC artifacts. In addition, artifacts in the CT itself caused by metal, beam hardening, and photon starvation also introduce inaccurate AC for PET. Maximum likelihood reconstruction of activity and attenuation (MLAA) was proposed to solve these issues by simultaneously reconstructing the tracer activity (λ-MLAA) and attenuation map (μ-MLAA) based on the PET raw data only. However, μ-MLAA suffers from high noise and λ-MLAA suffers from large bias compared with reconstruction using the CT-based attenuation map (μ-CT). Recently, a convolutional neural network (CNN) was applied to predict the CT attenuation map (μ-CNN) from λ-MLAA and μ-MLAA, in which an image-domain loss (IM-loss) function between μ-CNN and the ground-truth μ-CT was used. However, IM-loss does not directly measure the AC errors according to the PET attenuation physics, where the line-integral projection of the attenuation map (μ) along the path of the two annihilation photons, rather than μ itself, is used for AC. Therefore, a network trained with the IM-loss may yield suboptimal performance in μ generation. Here, we propose a novel line-integral projection loss (LIP-loss) function that incorporates the PET attenuation physics into μ generation. Eighty training and twenty testing datasets of whole-body 18F-FDG PET and paired ground-truth μ-CT were used. Quantitative evaluations showed that the model trained with the additional LIP-loss significantly outperformed the model trained solely on the IM-loss function.
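The key idea above, penalizing errors in the line integrals of μ rather than in μ itself, can be illustrated with a toy projector. A real implementation would use the scanner's system-matrix projector over all lines of response; here only axis-aligned sums stand in for it, so everything below is a simplified sketch:

```python
import numpy as np

def line_integrals(mu, axes=(0, 1)):
    """Toy parallel-beam projector: line integrals of the attenuation map mu
    along each image axis (a stand-in for the scanner's full projector)."""
    return [mu.sum(axis=a) for a in axes]

def lip_loss(mu_pred, mu_ct):
    """Line-integral projection loss: mean absolute difference between the
    projections of the predicted and CT-based attenuation maps, mirroring
    how mu enters PET attenuation correction."""
    return sum(np.mean(np.abs(p - q))
               for p, q in zip(line_integrals(mu_pred), line_integrals(mu_ct)))
```

Because the loss acts on projections, a small local error that barely changes any line integral is penalized less than one that accumulates along a line of response, which is exactly the behavior the attenuation physics motivates.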