Several research groups are studying organ-dedicated limited-angle positron emission tomography (PET) systems to optimize the performance-cost ratio, sensitivity, patient access and/or flexibility. Open systems are often considered, typically consisting of two detector panels of various sizes. Such systems provide incomplete sampling due to limited angular coverage and/or truncation, which leads to artefacts in the reconstructed activity images. In addition, these organ-dedicated PET systems are usually stand-alone, so no attenuation information can be obtained from anatomical images acquired in the same imaging session. It has been shown that the use of time-of-flight (TOF) information reduces incomplete-data artefacts and enables the joint estimation of the activity and the attenuation factors. In this work, we explore with simple 2D simulations the performance and stability of a joint reconstruction algorithm for imaging with a limited-angle PET system. The reconstruction is based on the so-called MLACF (Maximum Likelihood Attenuation Correction Factors) algorithm and uses linear attenuation coefficients in a region of known tissue class to obtain absolute quantification. Different panel sizes and different TOF resolutions are considered. The noise propagation is compared to that of MLEM reconstruction with exact attenuation correction for the same PET system. The results show that with good TOF resolution, images of good visual quality can be obtained. If a good scatter correction can also be implemented, quantitative PET imaging will be possible. Further research, in particular on scatter correction, is required.
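As background for the MLEM baseline mentioned in the abstract, the following is a minimal sketch of the classic (non-TOF) MLEM update for emission reconstruction. It is an illustration only, not the authors' MLACF or simulation code; the system matrix A and sinogram y are assumed toy inputs.

```python
import numpy as np

def mlem(A, y, n_iter=500, eps=1e-12):
    """Minimal MLEM sketch: A is the system matrix (bins x pixels),
    y the measured sinogram counts. Illustrative only."""
    sens = A.sum(axis=0)            # sensitivity image (back-projection of ones)
    x = np.ones(A.shape[1])         # uniform initial activity estimate
    for _ in range(n_iter):
        proj = A @ x                            # forward projection
        ratio = y / np.maximum(proj, eps)       # measured / estimated counts
        x = x / np.maximum(sens, eps) * (A.T @ ratio)  # multiplicative update
    return x
```

For noiseless, consistent data this update converges to the true activity; with incomplete limited-angle sampling, as discussed above, the inverse problem becomes ill-posed and artefacts appear, which is what TOF information helps to mitigate.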
Synthetic computed tomography (CT) images derived from magnetic resonance imaging (MRI) are of interest for radiotherapy planning and positron emission tomography (PET) attenuation correction. In recent years, deep learning implementations have demonstrated improvements over atlas-based and segmentation-based methods. Nevertheless, several open questions remain, such as which MRI sequence and which neural network architecture are best. In this work, we compared the performance of different combinations of two common MRI sequences (T1- and T2-weighted) and three state-of-the-art neural networks designed for medical image processing (V-Net, HighRes3dNet and ScaleNet). The experiments were conducted on brain datasets from a public database. Our results suggest that T1 images perform better than T2, but the results improve further when combining both sequences. The lowest mean absolute error over the entire head (MAE = 95.37 ± 11.70 HU) was achieved combining T1 and T2 scans with ScaleNet. All tested deep learning models achieved a significantly lower MAE (p < 0.05) than a well-known atlas-based method.
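The MAE figure reported above can be computed as sketched below. This is an assumed, illustrative implementation (the function and variable names are not from the paper): the mean absolute error in Hounsfield units between a synthetic CT and the reference CT, restricted to a head mask.

```python
import numpy as np

def mae_hu(synth_ct, ref_ct, mask):
    """Mean absolute error in HU over the masked region.
    synth_ct, ref_ct: arrays of HU values; mask: boolean array.
    Illustrative sketch, not the paper's evaluation code."""
    diff = np.abs(synth_ct[mask] - ref_ct[mask])
    return diff.mean()
```

Per-subject MAE values computed this way can then be averaged and compared across models, e.g. with a paired significance test as done in the paper.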