Parametric imaging has been shown to provide physiologically better quantitation than SUV imaging in PET. With the increased sensitivity of a recently developed total-body PET scanner, whole-body scans with higher temporal resolution become possible for dynamic analysis and parametric imaging. In this paper, we focus on deriving the parameter k1 using compartmental modeling and on developing a method to acquire whole-body FDG-PET parametric images using only the first 90 seconds of post-injection scan data from the total-body PET system. Dynamic projections were acquired with a time interval of 1 second for the first 30 seconds and 2 seconds for the following minute. Image-derived input functions were acquired from the ascending aorta in the reconstructed dynamic sequences. The one-tissue compartment model with a total of four parameters (k1, k2, blood fraction, delay time) was used. A maximum-likelihood-based estimation method was developed using the one-tissue compartment model solution. The accuracy of the estimated parameters was assessed by comparison with parameters estimated using a two-tissue irreversible model with 1-hour-long data. All four parametric images were successfully calculated using data from two volunteers. By comparing the time-activity curves acquired from the volumes of interest, it was shown that the parameters estimated using our method were able to predict the time-activity curves of the early dynamics of FDG in different organs. The time-delay effects for different organs were also clearly visible in the reconstructed time-delay image, with delay variations as large as 40 seconds.
The parameters estimated from the 90-second data and the 1-hour data were in good agreement for k1 and blood fraction, while a large difference in k2 was found between the two, suggesting that k2 cannot be reliably estimated from the 90-second scan. We have shown that, with the increased sensitivity of total-body PET, it is possible to estimate parametric images based on the very early dynamics following FDG injection. The estimated k1 could potentially be used clinically as an indicator for identifying abnormalities.
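The one-tissue compartment model described above can be sketched numerically. The abstract specifies a maximum-likelihood estimator; the sketch below substitutes ordinary least squares (`scipy.optimize.curve_fit`) purely to illustrate the four-parameter model (k1, k2, blood fraction, delay). The input-function shape and all parameter values are made up for the demonstration, not taken from the paper.

```python
import numpy as np
from scipy.optimize import curve_fit

def one_tissue_tac(t, k1, k2, vb, delay, cp):
    """Model TAC: (1 - vb) * [k1*exp(-k2*t) convolved with Cp(t - delay)] + vb*Cp(t - delay).

    t must be a uniform time grid; cp is the (image-derived) input function sampled on t.
    """
    cp_d = np.interp(t - delay, t, cp, left=0.0)                 # delayed input
    dt = t[1] - t[0]
    ct = np.convolve(cp_d, k1 * np.exp(-k2 * t))[:len(t)] * dt   # discrete convolution
    return (1.0 - vb) * ct + vb * cp_d

# Hypothetical example: recover the four parameters from a noise-free 90 s TAC.
t = np.arange(0.0, 90.0, 1.0)
cp = 100.0 * np.exp(-((t - 20.0) / 8.0) ** 2)   # made-up bolus input function
true_params = (0.5, 0.1, 0.05, 5.0)             # k1, k2, blood fraction, delay (s)
tac = one_tissue_tac(t, *true_params, cp)

popt, _ = curve_fit(
    lambda tt, k1, k2, vb, d: one_tissue_tac(tt, k1, k2, vb, d, cp),
    t, tac, p0=(0.3, 0.05, 0.1, 0.0),
    bounds=([0, 0, 0, 0], [2, 1, 1, 30]),
)
```

On noise-free synthetic data the fit recovers the generating parameters; with real 90-second data a likelihood-based objective and noise weighting, as in the paper, would be needed.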
This paper presents a novel image-based visual servoing (IBVS) controller based on quasi-min-max model predictive control (MPC). By transforming the image Jacobian matrix (i.e., the interaction matrix) into a convex combination of linear time-invariant vertex systems with the tensor-product (TP) model transformation method, the visual servoing system is represented as a polytopic linear parameter-varying (LPV) system. A robust controller is designed for the robotic visual servoing system subject to input and output constraints such as robot physical limitations and visibility constraints. The control signal is calculated online by solving a convex optimization problem involving linear matrix inequalities (LMIs) within the model predictive control framework. The proposed visual servoing method avoids inverting the image Jacobian matrix and can hence handle problems that are intractable for the classical IBVS controller, such as large displacements between the initial and desired camera poses. The ability to handle constraints keeps the image features within the desired field of view (FOV). To verify the effectiveness of the proposed algorithm, simulation results on a 6 degrees-of-freedom (DOF) robot manipulator with an eye-in-hand configuration are presented and discussed. [Fig. 13. Simulation results by quasi-min-max MPC-based IBVS with noise (Z = 0.06).]
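The depth-dependent interaction matrix that the TP/LPV machinery operates on is the standard point-feature image Jacobian from the visual-servoing literature. The sketch below only constructs that matrix; it does not implement the paper's TP model transformation or the LMI-based MPC, which build a polytopic representation of this matrix instead of inverting it.

```python
import numpy as np

def interaction_matrix(x, y, Z):
    """Classical interaction (image Jacobian) matrix for one point feature.

    (x, y): normalized image coordinates; Z: feature depth in the camera frame.
    The 1/Z entries are the varying parameter that makes the system LPV.
    """
    return np.array([
        [-1.0 / Z, 0.0,       x / Z, x * y,       -(1.0 + x * x),  y],
        [0.0,      -1.0 / Z,  y / Z, 1.0 + y * y, -x * y,         -x],
    ])

def stacked_jacobian(features, Z):
    """Stack per-feature matrices for a multi-point IBVS task (2N x 6)."""
    return np.vstack([interaction_matrix(x, y, Z) for (x, y) in features])
```

A classical IBVS law would apply the pseudo-inverse of the stacked matrix to the feature error; the proposed controller avoids exactly that inversion.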
Parametric imaging of Ki (the net influx rate) in FDG PET has been shown to provide better quantification and improved specificity for cancer detection compared with SUV imaging. Current methods for generating parametric images usually require a long dynamic scan. With the recently developed uEXPLORER scanner, a dramatic increase in sensitivity has reduced the noise in dynamic imaging, making it more robust to employ non-linear estimation methods and flexible protocols. In this work, we explored two new protocols, in addition to the standard 60-minute one, to investigate the possibility of reducing the scan time for Ki imaging.
Parametric imaging using the Patlak model has been shown to provide improved lesion detectability and specificity. The Patlak model requires both tissue time-activity curves (TACs) after equilibrium and knowledge of the input function from the start of injection. Therefore, the conventional dynamic scanning protocol typically runs from the radiotracer injection all the way to equilibrium. In this paper, we propose a hybrid population-based and model-based input function estimation and evaluate its use for whole-body Patlak analysis, in order to reduce the total scan time and simplify clinical Patlak parametric imaging protocols. Possible quantitative errors caused by the simplified scanning protocol were also analyzed both theoretically and with the use of clinical data. Materials and methods: Clinical data from 24 patients referred for tumor staging were included in this study. The patients underwent a whole-body dynamic PET study starting 20 min after FDG injection (0.13 mCi/kg). The proposed whole-body scanning protocol includes 6 passes with 4-5 bed positions, depending on the size of the patient, and 2 min per bed position. An input function from the literature was selected as the shape of the population-based input function. The descending aorta was segmented from the corresponding CT image, and the segmentation was applied to the reconstructed dynamic PET images to acquire an image-based input function, which was then fitted using an exponential model. Due to the late scan start, only the later portion of the input function was available; this portion was used to scale the population-based input function. The hybrid input function was used to derive the whole-body Patlak images. Assuming a given error in the population-based input function, its influence on the final Patlak images was also derived theoretically and verified using the clinical data sets.
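The hybrid-input-function step above, scaling a literature curve to the measured late tail, can be sketched as follows. The paper fits the image-derived tail with an exponential model first; this sketch skips that smoothing step and applies a simple least-squares global scale, so the function name and the scaling method are illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np

def scale_population_if(t_pop, cp_pop, t_late, cp_late):
    """Scale a population-based input function to match a measured late tail.

    t_pop/cp_pop: literature input-function shape on its own time grid.
    t_late/cp_late: image-derived input function (descending aorta), available
    only from the late scan start (~20 min p.i. in the proposed protocol).
    Returns the globally scaled population curve, i.e. the hybrid input function.
    """
    pop_at_late = np.interp(t_late, t_pop, cp_pop)
    # least-squares scale factor over the overlapping (late) portion
    s = np.dot(pop_at_late, cp_late) / np.dot(pop_at_late, pop_at_late)
    return s * cp_pop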
Finally, the image quality of the reconstructed Patlak slope image was evaluated by an experienced radiologist in four different aspects: image artifacts, image noise, lesion sharpness, and lesion detectability. Results: It was found that errors in the population-based input function only affect the absolute scale of the Patlak slope image. The induced error is proportional to the percentage area-under-curve (AUC) error in the input function. These findings were also confirmed by numerical analysis. The predicted global scale was in good agreement with results from both the image-based Patlak and the direct Patlak approaches. The fraction of the AUC contributed by the early portion of the population-based input function was found to be around 18% of the total AUC, further limiting the propagation of quantitation error from the population-based input function to the final Patlak slope image. The reconstructed Patlak images were also found by the radiologist to provide excellent confidence in lesion-detection tasks. Conclusions: We have proposed a simplified whole-body scanning protocol that utilizes both a population-based and a model-based input function. The error fr...
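The scale-only error propagation reported above can be checked with the standard Patlak graphical analysis. The sketch below implements the usual Patlak regression (x = AUC(Cp)/Cp, y = Ct/Cp, slope Ki after equilibrium) on synthetic data; the tracer curve and parameter values are invented for the demonstration. Scaling the input function by a factor s leaves x unchanged and divides y by s, so the fitted Ki is divided by s, in line with the abstract's claim.

```python
import numpy as np

def patlak_fit(t, ct, cp, t_star):
    """Patlak graphical analysis: returns (Ki, V) from the linear portion t >= t_star."""
    auc = np.concatenate(([0.0], np.cumsum(0.5 * (cp[1:] + cp[:-1]) * np.diff(t))))
    x, y = auc / cp, ct / cp
    m = t >= t_star
    ki, v = np.polyfit(x[m], y[m], 1)
    return ki, v

# Synthetic check of the scale-error claim.
t = np.linspace(0.0, 60.0, 121)
cp = 100.0 * np.exp(-0.05 * t) + 10.0      # hypothetical input function
auc = np.concatenate(([0.0], np.cumsum(0.5 * (cp[1:] + cp[:-1]) * np.diff(t))))
ct = 0.02 * auc + 0.3 * cp                 # tissue TAC with Ki = 0.02, V = 0.3
ki_true, _ = patlak_fit(t, ct, cp, 30.0)
ki_scaled, _ = patlak_fit(t, ct, 1.1 * cp, 30.0)   # +10% AUC error in the IF
# ki_scaled equals ki_true / 1.1: the error is a pure global scale of Ki
```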
Conventional positron emission tomography (PET) image reconstruction is achieved by statistical iterative methods. Deep learning provides another opportunity for speeding up the image reconstruction process. However, conventional deep learning-based image reconstruction requires a fully connected network for learning the Radon transform. The use of fully connected networks greatly complicates the network and increases hardware cost. In this study, we proposed a novel deep learning-based image reconstruction method that utilizes the DIRECT data partitioning method. A U-net structure with only convolutional layers was used in our approach. Patch-based model training and testing were used to achieve 3D reconstructions within current hardware limitations. Time-of-flight (TOF) histoimages were first generated from the listmode data to replace conventional sinograms. Different projection angles were used as different channels in the input. Data from a total of 15 patients were used in this study. For each patient, a dynamic whole-body scanning protocol was used to expand the training dataset, and a total of 372 separate scans were included. The leave-one-patient-out validation method was used. Two separate studies were carried out. In the first study, the measured TOF histoimages were directly used for model training and testing, to study the performance of the method in real-world applications. In the second study, TOF histoimages were simulated from already-reconstructed images to exclude scatter, randoms, and attenuation-activity mismatch effects. This study was used to evaluate the optimal performance when all other corrections are ideal. Volumes of interest were placed in the liver and lesion regions to study image noise and lesion quantitation. The reconstructed images using the proposed deep learning method showed image quality similar to that of the conventional expectation-maximization approach.
A minimal difference was observed when the simulated TOF histoimages were used for model training and testing, suggesting that the deep learning model can indeed learn the reconstruction process. Some quantitative differences were observed when the measured TOF histoimages were used. Together, the two studies suggest that the major difference is caused by inaccurate corrections performed by the network itself, indicating that physics-based corrections are still required for better quantitative performance. In conclusion, we have proposed a novel deep learning-based image reconstruction method for TOF PET. With the help of the DIRECT data partitioning method, no fully connected layers are needed, and 3D image reconstruction can be achieved directly within the limits of current hardware.
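The TOF-histoimage idea central to this approach, depositing each listmode event at its most likely position along the line of response rather than into a sinogram, can be sketched in 2D. The sign convention for the TOF time difference, the view binning, and the data layout below are assumptions of this sketch, not the DIRECT implementation used in the paper.

```python
import numpy as np

C_MM_PER_NS = 299.792458  # speed of light in mm/ns

def tof_histoimage(events, shape, n_views, voxel=1.0):
    """Minimal 2-D DIRECT-style TOF-histoimage deposition (sketch).

    events: iterable of (x1, y1, x2, y2, dt) in mm/ns, where (x1, y1) and
    (x2, y2) are the LOR endpoints and dt is the TOF time difference.
    Each event is deposited at its most likely emission point along the LOR,
    in the channel of its projection angle (one channel per view, as in the
    network input described above).
    """
    hist = np.zeros((n_views,) + shape)
    for x1, y1, x2, y2, dt in events:
        p1 = np.array([x1, y1], dtype=float)
        d = np.array([x2 - x1, y2 - y1], dtype=float)
        length = np.linalg.norm(d)
        # assumed convention: dt > 0 shifts the point from midpoint toward detector 1
        frac = 0.5 - (dt * C_MM_PER_NS / 2.0) / length
        p = p1 + frac * d
        view = int((np.degrees(np.arctan2(d[1], d[0])) % 180.0) // (180.0 / n_views))
        i, j = int(np.floor(p[1] / voxel)), int(np.floor(p[0] / voxel))
        if 0 <= i < shape[0] and 0 <= j < shape[1]:
            hist[view, i, j] += 1.0
    return hist
```

Because the event is placed directly in image space, a purely convolutional network can map the histoimage to the reconstructed image without a fully connected layer to learn the Radon transform.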