Data-driven respiratory gating techniques were developed to correct for respiratory motion in PET studies without the help of external motion-tracking systems. Because gated reconstructions suffer from greatly increased image noise, it is desirable to develop a data-driven event-by-event respiratory motion correction method. In this study, using the centroid-of-distribution (COD) algorithm, we established a data-driven event-by-event respiratory motion correction technique using TOF PET list-mode data and investigated its performance by comparison with an external-system-based correction method. Ten human scans with the pancreatic β-cell tracer 18F-FP-(+)-DTBZ were employed. Data-driven respiratory motion in the superior-inferior (SI) and anterior-posterior (AP) directions was first determined by computing the centroid of all radioactive events during each short time frame, with further processing. The Anzai belt system was used to record respiratory motion in all studies. COD traces in both the SI and AP directions were first compared with Anzai traces by computing Pearson correlation coefficients. Then, respiratory-gated reconstructions based on either COD or Anzai traces were performed to evaluate their relative performance in capturing respiratory motion. Finally, based on correlations of organ displacements in all directions with the COD information, continuous internal organ motion in the SI and AP directions was calculated from the COD traces to guide event-by-event respiratory motion correction in the MOLAR reconstruction framework. Continuous respiratory correction results based on COD were compared with those based on Anzai and with reconstructions without motion correction. Data-driven COD traces correlated well with Anzai traces in both the SI and AP directions for the majority of studies, with correlation coefficients ranging from 0.63 to 0.89.
Based on the respiratory displacements of the pancreas between end-expiration and end-inspiration determined from gated reconstructions, there was no significant difference between the COD-based and Anzai-based methods. Finally, data-driven COD-based event-by-event respiratory motion correction yielded results comparable to those based on the Anzai respiratory traces in terms of contrast recovery and reduced motion-induced blur. Data-driven event-by-event respiratory motion correction using COD showed significant image-quality improvement compared with reconstructions with no motion correction and gave results comparable to the Anzai-based method.
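The core of the COD approach described above, averaging event coordinates over short frames and correlating the resulting trace with an external belt signal, can be sketched as follows. This is a minimal illustration, not the study's implementation; the array layout, the 0.5-s frame length, and the simulated respiratory signal in the usage example are assumptions.

```python
import numpy as np

def cod_trace(event_times, event_coords, frame_len=0.5):
    """Centroid-of-distribution trace: mean event coordinate per time frame.

    event_times  : (N,) event arrival times in seconds
    event_coords : (N,) event coordinate along one axis (e.g. SI), in mm
    frame_len    : frame duration in seconds (illustrative choice)
    Returns one centroid value per frame.
    """
    frames = np.floor(np.asarray(event_times) / frame_len).astype(int)
    n_frames = frames.max() + 1
    sums = np.bincount(frames, weights=event_coords, minlength=n_frames)
    counts = np.bincount(frames, minlength=n_frames)
    return sums / np.maximum(counts, 1)  # guard against empty frames

def pearson_r(a, b):
    """Pearson correlation coefficient between two equal-length traces."""
    a = np.asarray(a, dtype=float)
    b = np.asarray(b, dtype=float)
    da = a - a.mean()
    db = b - b.mean()
    return float(da @ db / np.sqrt((da @ da) * (db @ db)))

# Usage: events whose coordinates follow a 0.25-Hz "respiratory" sinusoid
# should yield a COD trace that correlates strongly with that signal.
times = np.linspace(0.0, 10.0, 20000)
coords = 5.0 * np.sin(2 * np.pi * 0.25 * times)
trace = cod_trace(times, coords, frame_len=0.5)
centers = (np.arange(trace.size) + 0.5) * 0.5
reference = 5.0 * np.sin(2 * np.pi * 0.25 * centers)
```

In practice the per-frame counts are large enough that Poisson noise in the centroid is small, which is what makes the short-frame averaging a usable motion surrogate.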
The results of these studies provide useful guidance in selecting the proper reflectors and crystal surface treatments when LSO arrays are used for high-resolution PET applications in small animal scanners or dedicated breast and brain scanners.
Respiratory motion degrades the detection and quantification capabilities of PET/CT imaging. Moreover, the mismatch between a fast helical CT image and a time-averaged PET image due to respiratory motion results in additional attenuation-correction artifacts and inaccurate localization. Current motion compensation approaches typically have 3 limitations: the mismatch between respiration-gated PET images and the CT attenuation correction (CTAC) map can introduce artifacts into the gated PET reconstructions that can subsequently affect the accuracy of the motion estimation; sinogram-based correction approaches do not correct for intragate motion due to intracycle and intercycle breathing variations; and the mismatch between the PET motion-compensation reference gate and the CT image can cause an additional CT-mismatch artifact. In this study, we established a motion correction framework to address these limitations. In the proposed framework, the combined emission-transmission reconstruction algorithm was used for phase-matched gated PET reconstructions to facilitate the motion-model building. An event-by-event nonrigid respiratory motion compensation method using correlations between internal organ motion and external respiratory signals was used to correct both intracycle and intercycle breathing variations. The PET reference gate was determined automatically by a newly proposed CT-matching algorithm. We applied the new framework to 13 human datasets with 3 different radiotracers and 323 lesions and compared its performance with CTAC and non-attenuation-correction (NAC) approaches. Validation using 4-dimensional CT was performed for one lung cancer dataset. For the 10 18F-FDG studies, the proposed method outperformed (P < 0.006) both the CTAC and the NAC methods in terms of region-of-interest-based SUVmean, SUVmax, and SUV-ratio improvements over no motion correction (SUVmean: 19.9% vs. 14.0% vs. 13.2%; SUVmax: 15.5% vs. 10.8% vs. 10.6%; SUV ratio: 24.1% vs. 17.6% vs. 16.2%, for the proposed, CTAC, and NAC methods, respectively). The proposed method increased SUV ratios over no motion correction for 94.4% of lesions, compared with 84.8% and 86.4% using the CTAC and NAC methods, respectively. For the 2 18F-fluoropropyl-(+)-dihydrotetrabenazine studies, the proposed method reduced the CT-mismatch artifacts in the lower lung, where the CTAC approach failed, and maintained the quantification accuracy of bone marrow, where the NAC approach failed. For the 18F-FMISO study, the proposed method outperformed both the CTAC and the NAC methods in terms of motion-estimation accuracy at 2 lung lesion locations. The proposed PET/CT respiratory event-by-event motion-correction framework, with motion information derived from matched attenuation-corrected PET data, provides image quality superior to that of the CTAC and NAC methods for multiple tracers.
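The correlation between internal organ motion and the external respiratory signal that drives the event-by-event compensation can be illustrated with a simple per-direction linear model. This is a minimal sketch under an assumed linear relationship; the actual framework estimates nonrigid motion fields, and the function names here are illustrative.

```python
import numpy as np

def fit_motion_model(external, displacement):
    """Least-squares linear model d(t) ~ a * s(t) + b relating an external
    respiratory signal s to an internal organ displacement d (one direction).

    `external` and `displacement` are paired samples, e.g. the gate-wise
    signal amplitude and the organ displacement measured between gates.
    """
    s = np.asarray(external, dtype=float)
    d = np.asarray(displacement, dtype=float)
    A = np.column_stack([s, np.ones_like(s)])
    (a, b), *_ = np.linalg.lstsq(A, d, rcond=None)
    return a, b

def continuous_displacement(external, a, b):
    """Predict a displacement for every time point (or every event) from the
    continuous external trace, enabling event-by-event correction."""
    return a * np.asarray(external, dtype=float) + b

# Usage: with a perfectly linear relationship the fit recovers slope/intercept.
ext = np.linspace(0.0, 1.0, 50)
disp = 3.0 * ext - 1.0
a, b = fit_motion_model(ext, disp)
pred = continuous_displacement(ext, a, b)
```

Once fitted, the model converts the densely sampled external trace into a displacement estimate at arbitrary event times, which is what allows correction of intracycle as well as intercycle variation.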
PET has the potential to perform absolute in vivo radiotracer quantitation. This potential can be compromised by voluntary body motion (BM), which degrades image resolution, alters apparent tracer uptake, introduces CT-based attenuation-correction mismatch artifacts, and causes inaccurate parameter estimates in dynamic studies. Existing body motion correction (BMC) methods include frame-based image-registration (FIR) approaches and real-time motion tracking using external measurement devices. FIR does not correct for motion occurring within a predefined frame, and the device-based method is generally not practical in routine clinical use, since it requires attaching a tracking device to the patient and additional device setup time. In this paper, we propose a data-driven algorithm, centroid of distribution (COD), to detect BM. In this algorithm, the central coordinate of the time-of-flight (TOF) bin, which can serve as a reasonable surrogate for the annihilation point, is calculated for every event and averaged over a certain time interval to generate a COD trace. We hypothesized that abrupt changes in the COD trace in the lateral direction represent BMs. After detection, BM is estimated using nonrigid image registration and corrected through list-mode reconstruction. The COD-based BMC approach was validated using a monkey study and evaluated against FIR using four human studies and one dog study with multiple tracers. The proposed approach successfully detected BMs and yielded superior correction results over conventional FIR approaches.
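The TOF-bin surrogate for the annihilation point mentioned above follows from the standard TOF relation: the event lies on the line of response, offset from the midpoint by c·Δt/2 along the line. A minimal sketch, with an assumed sign convention for the timing difference and illustrative detector geometry:

```python
import numpy as np

C_MM_PER_PS = 0.299792458  # speed of light in mm per picosecond

def tof_event_position(p1, p2, dt_ps):
    """Surrogate annihilation point for a single TOF event.

    p1, p2 : (3,) detector endpoints of the line of response, in mm
    dt_ps  : TOF difference t1 - t2 in picoseconds (sign convention assumed;
             real scanners define their own)
    The point lies on the LOR, offset from its midpoint by c*dt/2. With
    dt = t1 - t2 > 0, photon 2 arrived earlier, so the annihilation was
    closer to p2 and the offset points from p1 toward p2.
    """
    p1 = np.asarray(p1, dtype=float)
    p2 = np.asarray(p2, dtype=float)
    mid = 0.5 * (p1 + p2)
    u = p2 - p1
    u = u / np.linalg.norm(u)            # unit vector along the LOR
    offset = 0.5 * C_MM_PER_PS * dt_ps   # mm
    return mid + offset * u

# Usage: a 600-mm LOR along x; zero timing difference maps to the midpoint,
# and a positive difference shifts the point toward the later detector p2.
center = tof_event_position([-300.0, 0.0, 0.0], [300.0, 0.0, 0.0], 0.0)
shifted = tof_event_position([-300.0, 0.0, 0.0], [300.0, 0.0, 0.0],
                             2 * 30.0 / C_MM_PER_PS)
```

Averaging these per-event points over an interval gives exactly the COD trace the abstract describes; the lateral component of that average is what jumps when the subject moves.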
Head motion degrades image quality and causes erroneous parameter estimates in tracer kinetic modeling in brain PET studies. Existing motion correction methods include frame-based image registration (FIR) and correction using real-time hardware-based motion tracking (HMT) information. However, FIR cannot correct for motion within 1 predefined scan period, and HMT is not readily available in the clinic since it typically requires attaching a tracking device to the patient. In this study, we propose a motion correction framework with a data-driven algorithm, that is, using the PET raw data itself, to address these limitations. Methods: We propose a data-driven algorithm, centroid of distribution (COD), to detect head motion. In COD, the central coordinates of the line of response of all events are averaged over 1-s intervals to generate a COD trace. A point-to-point change in the COD trace in 1 direction that exceeded a user-defined threshold was defined as a time point of head motion, which was followed by manually adding additional motion time points. All the frames defined by such time points were reconstructed without attenuation correction and rigidly registered to a reference frame. The resulting transformation matrices were then used to perform the final motion-compensated reconstruction. We applied the new COD framework to 23 human dynamic datasets, all containing large head motion, with 18F-FDG (n = 13) and 11C-UCB-J ((R)-1-((3-(11C-methyl)pyridin-4-yl)methyl)-4-(3,4,5-trifluorophenyl)pyrrolidin-2-one) (n = 10) and compared its performance with FIR and with HMT using Vicra (an optical HMT device), which can be considered the gold standard. Results: The COD method yielded a 1.0% ± 3.2% (mean ± SD across all subjects and 12 gray matter regions) SUV difference for 18F-FDG (3.7% ± 5.4% for 11C-UCB-J) compared with HMT, whereas no motion correction (NMC) and FIR yielded −15.7% ± 12.2% (−20.5% ± 15.8%) and −4.7% ± 6.9% (−6.2% ± 11.0%), respectively.
For 18F-FDG dynamic studies, COD yielded differences of 3.6% ± 10.9% in Ki value as compared with HMT, whereas NMC and FIR yielded −18.0% ± 39.2% and −2.6% ± 19.8%, respectively. For 11C-UCB-J, COD yielded 3.7% ± 5.2% differences in VT compared with HMT, whereas NMC and FIR yielded −20.0% ± 12.5% and −5.3% ± 9.4%, respectively. Conclusion: The proposed COD-based data-driven motion correction method outperformed FIR and achieved comparable or even better performance than the Vicra HMT method in both static and dynamic studies.
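The point-to-point threshold detection on the COD trace described in the Methods might be sketched as follows. The 1-s sampling comes from the abstract; the threshold value, function names, and frame-splitting helper are illustrative assumptions (and the study additionally added motion time points manually).

```python
import numpy as np

def detect_motion_times(cod, threshold, t0=0.0, dt=1.0):
    """Flag candidate head-motion time points on a 1-D COD trace.

    cod       : COD values sampled every `dt` seconds (1 s in the paper)
    threshold : user-defined limit on the point-to-point change
                (same units as the COD values)
    Returns the times (seconds) at which |cod[i] - cod[i-1]| exceeds threshold.
    """
    cod = np.asarray(cod, dtype=float)
    jumps = np.abs(np.diff(cod)) > threshold
    return t0 + dt * (np.nonzero(jumps)[0] + 1)

def frame_boundaries(motion_times, scan_start, scan_end):
    """Split the scan into frames delimited by the detected motion time points;
    each frame is then reconstructed and registered to a reference frame."""
    inner = [t for t in motion_times if scan_start < t < scan_end]
    edges = [scan_start] + inner + [scan_end]
    return list(zip(edges[:-1], edges[1:]))

# Usage: a trace with two abrupt shifts yields two motion time points and
# three motion-free frames.
cod = [0.0, 0.1, 0.2, 5.0, 5.1, 5.2, 0.1, 0.2]
times = detect_motion_times(cod, threshold=1.0)
frames = frame_boundaries(list(times), 0.0, 8.0)
```

Thresholding the first difference rather than the absolute level makes the detector insensitive to slow tracer-distribution drift while still catching abrupt repositioning.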