Low-dose computed tomography (LDCT) images are often severely degraded by amplified mottle noise and streak artifacts, which are hard to suppress without blurring tissue structures. In this paper, we propose to process LDCT images using a novel image-domain algorithm called "artifact suppressed dictionary learning (ASDL)." In this ASDL method, orientation and scale information on artifacts is exploited to train artifact atoms, which are then combined with tissue-feature atoms to build three discriminative dictionaries. The streak artifacts are cancelled via a discriminative sparse-representation operation based on these dictionaries. A general dictionary-learning step is then applied to further reduce noise and residual artifacts. Qualitative and quantitative evaluations on a large set of abdominal and mediastinal CT images show that the proposed method can be applied efficiently in most current CT systems.
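The core discriminative sparse-representation step can be illustrated with a minimal sketch: a patch is coded over a combined dictionary of tissue atoms and artifact atoms, and then reconstructed from the tissue atoms only. The `omp` and `suppress_artifacts` functions below are hypothetical illustrations (a basic orthogonal matching pursuit over unit-norm atoms), not the paper's implementation.

```python
import numpy as np

def omp(D, y, k):
    """Greedy orthogonal matching pursuit: sparse-code vector y over
    dictionary D (columns are unit-norm atoms), using at most k atoms."""
    residual = y.copy()
    support = []
    coef = np.zeros(D.shape[1])
    c = np.zeros(0)
    for _ in range(k):
        idx = int(np.argmax(np.abs(D.T @ residual)))
        if idx not in support:
            support.append(idx)
        sub = D[:, support]
        # Re-fit coefficients on the current support by least squares
        c, *_ = np.linalg.lstsq(sub, y, rcond=None)
        residual = y - sub @ c
    coef[support] = c
    return coef

def suppress_artifacts(D_tissue, D_artifact, patch, k=4):
    """Code the patch over the combined [tissue | artifact] dictionary,
    then reconstruct from the tissue atoms only, discarding artifact energy."""
    D = np.hstack([D_tissue, D_artifact])
    coef = omp(D, patch, k)
    n_t = D_tissue.shape[1]
    return D_tissue @ coef[:n_t]
```

With a toy 4-dimensional dictionary where the patch is a tissue atom plus an artifact atom, the reconstruction keeps only the tissue component.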
In abdominal computed tomography (CT), repeated radiation exposures are often inevitable for cancer patients whose surgery or radiotherapy is guided by CT images. Low-dose scans should therefore be considered to limit the harm of cumulative x-ray radiation. This work aims to improve abdominal tumor CT images from low-dose scans by using fast dictionary-learning (DL) based processing. Stemming from sparse-representation theory, the proposed patch-based DL approach effectively suppresses both mottle noise and streak artifacts. Experiments on clinical data show that the proposed method brings encouraging improvements to low-dose abdominal CT images with tumors.
Reducing radiation dose is important in PET imaging. However, lower injected doses increase image noise and reduce the signal-to-noise ratio (SNR), degrading diagnostic and quantitative accuracy. Deep learning methods have shown great potential to reduce noise and improve SNR in low-dose PET data. In this work, we comprehensively investigated the quantitative accuracy of small lung nodules, in addition to visual image quality, for deep learning based denoising of oncological PET images. We applied and optimized an advanced deep learning method based on the U-net architecture to predict standard-dose PET images from 10% low-dose PET data. We also investigated the effect of different network architectures, image dimensions, labels, and inputs on both noise-reduction performance and quantitative accuracy. Normalized mean square error (NMSE), SNR, and standardized uptake value (SUV) bias in different nodule regions of interest (ROIs) were used for evaluation. Our results showed that U-net and GAN are superior to CAE, with smaller SUVmean and SUVmax bias at the expense of lower SNR. A fully 3D U-net achieved the best quantitative performance compared with 2D and 2.5D U-nets, with less than 15% SUVmean bias for all ten patients. U-net generally outperformed the residual U-net (r-U-net), with smaller NMSE, higher SNR, and lower SUVmax bias. The fully 3D U-net was also superior to several existing denoising methods, including the Gaussian filter, the anatomically guided non-local means (NLM) filter, and MAP reconstruction with quadratic and relative-difference priors, in terms of image quality and the trade-off between noise and bias.
Furthermore, incorporating aligned CT images in a multi-channel U-net has the potential to further improve quantitative accuracy. We found that the optimal architectures and parameters of deep learning based methods differ for absolute quantitative accuracy and for visual image quality. Our quantitative results demonstrate that a fully 3D U-net can both effectively reduce image noise and control bias, even for sub-centimeter lung nodules, when generating standard-dose PET images from 10% low-count down-sampled data.
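The evaluation metrics used above are standard and easy to state concretely. The sketch below shows one common formulation of NMSE and of percent SUVmean bias within a nodule ROI; the exact normalization used in the study may differ, and the ROI mask here is a hypothetical boolean array.

```python
import numpy as np

def nmse(pred, ref):
    """Normalized mean square error of a denoised image against the
    standard-dose reference (normalized by the reference energy)."""
    return float(np.sum((pred - ref) ** 2) / np.sum(ref ** 2))

def suv_mean_bias(pred, ref, roi_mask):
    """Percent bias of the mean uptake inside a nodule ROI,
    relative to the reference mean uptake."""
    return float(100.0 * (pred[roi_mask].mean() - ref[roi_mask].mean())
                 / ref[roi_mask].mean())
```

For example, a prediction that uniformly overestimates the reference by 10% yields a 10% SUVmean bias in any ROI.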
Purpose: Attenuation correction using CT transmission scanning increases the accuracy of single-photon emission computed tomography (SPECT) and enables quantitative analysis. Most existing SPECT-only systems do not support transmission scanning, so scans on these systems are susceptible to attenuation artifacts. Moreover, CT scanning increases the radiation dose to patients, and significant artifacts can occur due to misregistration between the SPECT and CT scans caused by patient motion. The purpose of this study is to develop an approach that estimates attenuation maps directly from SPECT emission data using deep learning. Methods: Both photopeak-window and scatter-window SPECT images were used as inputs to better exploit the attenuation information embedded in the emission data, and CT-based attenuation maps were used as labels; cardiac SPECT/CT images of 65 patients were included for training and testing. We implemented and evaluated deep fully convolutional neural networks trained both conventionally and with an adversarial strategy. Results: The synthetic attenuation maps were qualitatively and quantitatively consistent with the CT-based attenuation maps. The globally normalized mean absolute error (NMAE) between the synthetic and CT-based attenuation maps was 3.60% ± 0.85% across the 25 testing subjects. SPECT images reconstructed with the CT-based and synthetic attenuation maps were highly consistent: the NMAE between them was 0.26% ± 0.15%, whereas the localized absolute percentage error was 1.33% ± 3.80% in the left-ventricular (LV) myocardium and 1.07% ± 2.58% in the LV blood pool. Conclusion: We developed a deep convolutional neural network to estimate attenuation maps for SPECT directly from the emission data.
The proposed method is capable of generating highly reliable attenuation maps to facilitate attenuation correction for SPECT-only scanners for myocardial perfusion imaging.
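The globally normalized mean absolute error used to compare synthetic and CT-based attenuation maps can be sketched as follows. Normalizing by the reference dynamic range is one common convention; the paper's exact normalizer is an assumption here.

```python
import numpy as np

def global_nmae(pred, ref):
    """Globally normalized mean absolute error (in percent): mean |pred - ref|
    divided by the dynamic range of the reference map (one common choice of
    normalizer; other works normalize by the reference mean instead)."""
    rng = ref.max() - ref.min()
    return float(100.0 * np.mean(np.abs(pred - ref)) / rng)
```

A prediction offset from the reference by 10% of the dynamic range at every voxel gives an NMAE of 10%.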
In PET/CT imaging, CT is used for PET attenuation correction (AC). Mismatch between CT and PET due to patient body motion results in AC artifacts. In addition, artifacts caused by metal, beam hardening, and photon starvation in the CT itself also introduce inaccurate AC for PET. Maximum-likelihood reconstruction of activity and attenuation (MLAA) was proposed to address these issues by simultaneously reconstructing the tracer activity (λ-MLAA) and attenuation map (μ-MLAA) from the PET raw data alone. However, μ-MLAA suffers from high noise, and λ-MLAA suffers from large bias compared with reconstruction using the CT-based attenuation map (μ-CT). Recently, a convolutional neural network (CNN) was applied to predict the CT attenuation map (μ-CNN) from λ-MLAA and μ-MLAA, using an image-domain loss (IM-loss) between μ-CNN and the ground-truth μ-CT. However, the IM-loss does not directly measure AC errors according to PET attenuation physics, in which the line-integral projection of the attenuation map (μ) along the path of the two annihilation photons, rather than μ itself, is used for AC. A network trained with the IM-loss alone may therefore yield suboptimal μ estimates. Here, we propose a novel line-integral projection loss (LIP-loss) that incorporates PET attenuation physics into μ generation. Eighty training and twenty testing datasets of whole-body ¹⁸F-FDG PET with paired ground-truth μ-CT were used. Quantitative evaluations showed that the model trained with the additional LIP-loss significantly outperformed the model trained solely with the IM-loss.
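The idea of combining an image-domain loss with a projection-domain loss can be sketched numerically. For illustration only, the "projector" below takes parallel-beam line integrals along just two orthogonal views (row and column sums); a real implementation would forward-project along all lines of response, and the weighting between the two terms is a free hyperparameter, not a value from the paper.

```python
import numpy as np

def lip_loss(mu_pred, mu_ct, im_weight=1.0, lip_weight=1.0):
    """Combined loss: image-domain MSE plus MSE between line-integral
    projections of the predicted and CT-based attenuation maps.
    Only two orthogonal parallel-beam views are used here as a crude
    stand-in for the full sinogram projector."""
    im_loss = np.mean((mu_pred - mu_ct) ** 2)
    # Line integrals along the two image axes (projections at 0 and 90 degrees)
    proj_pred = np.concatenate([mu_pred.sum(axis=0), mu_pred.sum(axis=1)])
    proj_ct = np.concatenate([mu_ct.sum(axis=0), mu_ct.sum(axis=1)])
    lip = np.mean((proj_pred - proj_ct) ** 2)
    return im_weight * im_loss + lip_weight * lip
```

Because projections sum errors along each line, the projection term penalizes spatially coherent attenuation errors more heavily than voxel-wise noise, which is the physical motivation for the LIP-loss.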
This paper proposes a concise and effective approach termed discriminative feature representation (DFR) for low-dose computed tomography (LDCT) image processing, currently a challenging problem in the medical imaging field. The DFR method models LDCT images as the superposition of desirable high-dose CT (HDCT) 3D features and undesirable noise-artifact 3D features (the combined noise and artifact features induced by low-dose scan protocols); the decomposed HDCT features are then used to produce processed LDCT images of higher quality. The target HDCT features are recovered by the DFR algorithm using a featured dictionary whose atoms represent HDCT features and noise-artifact features. In this study, the featured dictionary is built efficiently from physical phantom images collected on the same CT scanner as the target clinical LDCT images. The proposed DFR method is also robust to parameter settings across different CT scanner types, can be applied directly to DICOM-formatted LDCT images, and thus has good applicability to current CT systems. Comparative experiments with abdominal LDCT data validate the good performance of the proposed approach.
Purpose: Dedicated cardiac SPECT scanners with cadmium-zinc-telluride (CZT) cameras have demonstrated shortened scan times or reduced radiation doses as well as improved image quality. Since most dedicated scanners do not have an integrated CT, image quantification with attenuation correction (AC) is challenging, and artifacts are routinely encountered in daily clinical practice. In this work, we demonstrate a direct AC technique using deep learning (DL) for myocardial perfusion imaging (MPI). Methods: In an IRB-approved retrospective study, 100 cardiac SPECT/CT datasets acquired with 99mTc-tetrofosmin on a GE Discovery NM/CT 570c scanner were collected at Yale New Haven Hospital. A U-Net-based network was used to generate attenuation-corrected SPECT (SPECTDL) directly from non-corrected SPECT (SPECTNC), without an additional image-reconstruction step. The accuracy of SPECTDL was evaluated by voxel-wise and segment-wise analyses against the reference CT-based AC (SPECTCTAC) using the American Heart Association 17-segment model of the myocardium. Polar maps of representative (best/median/worst) cases were visually compared to illustrate potential benefits and pitfalls of the DL approach. Results: The voxel-wise correlations with SPECTCTAC were 92.2% ± 3.7% (slope = 0.87; R² = 0.81) and 97.7% ± 1.8% (slope = 0.94; R² = 0.91) for SPECTNC and SPECTDL, respectively. The segmental errors of SPECTNC ranged from -35% to 21% (p < 0.001), whereas the errors of SPECTDL stayed mostly within ±10% (p < 0.001). The average segmental errors (mean ± SD) were -6.11% ± 8.06% and 0.49% ± 4.35% for SPECTNC and SPECTDL, respectively; the average absolute segmental errors were 7.96% ± 6.23% and 3.31% ± 2.87%, respectively. Review of the polar maps confirmed reduced attenuation artifacts; however, the performance of SPECTDL was not consistent across all subjects, likely due to differing amounts of attenuation and uptake patterns.
Conclusion: We demonstrated the feasibility of direct AC using DL for SPECT MPI. Overall, our DL approach substantially reduced attenuation artifacts compared with SPECTNC, justifying further studies to establish safety and consistency for clinical application in stand-alone SPECT systems that suffer from attenuation artifacts.
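The segment-wise error analysis described above can be sketched as follows. The function assumes a hypothetical integer label map assigning each myocardial voxel to one of the 17 AHA segments (with 0 as background) and reports the percent error of mean uptake per segment against the CT-corrected reference.

```python
import numpy as np

def segmental_errors(spect_dl, spect_ctac, segment_labels):
    """Per-segment percent error of DL-corrected vs CT-corrected mean uptake.
    segment_labels is an integer map (0 = background, 1..17 = AHA segments)."""
    errors = {}
    for seg in np.unique(segment_labels):
        if seg == 0:  # skip background
            continue
        m = segment_labels == seg
        errors[int(seg)] = float(
            100.0 * (spect_dl[m].mean() - spect_ctac[m].mean())
            / spect_ctac[m].mean())
    return errors
```

A uniform 5% overestimate in one segment and a 10% underestimate in another yield errors of +5% and -10% for those segments.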