Background
Computed tomography (CT) is widely used as an imaging tool to visualize three-dimensional structures with strong bone–soft tissue contrast. However, CT image quality can be severely degraded by low-dose acquisitions, highlighting the importance of effective denoising algorithms.

Purpose
Most data-driven denoising techniques are based on deep neural networks and therefore contain hundreds of thousands of trainable parameters, making them incomprehensible and prone to prediction failures. Developing understandable and robust denoising algorithms that achieve state-of-the-art performance helps to minimize radiation dose while maintaining data integrity.

Methods
This work presents an open-source CT denoising framework based on the idea of bilateral filtering. We propose a bilateral filter that can be incorporated into any deep learning pipeline and optimized in a purely data-driven way by calculating the gradient flow toward its hyperparameters and its input. Denoising is demonstrated in pure image-to-image pipelines and across different domains, such as raw detector data and the reconstructed volume, using a differentiable backprojection layer. In contrast to other models, our bilateral filter layer consists of only four trainable parameters and, by design, constrains the applied operation to follow the traditional bilateral filter algorithm.

Results
Although using only three spatial parameters and one intensity range parameter per filter layer, the proposed denoising pipelines can compete with deep state-of-the-art denoising architectures with several hundred thousand parameters. Competitive denoising performance is achieved on X-ray microscope bone data and the 2016 Low Dose CT Grand Challenge data set. We report structural similarity index measures of 0.7094 and 0.9674 and peak signal-to-noise ratio values of 33.17 and 43.07 on the respective data sets.

Conclusions
Due to the extremely low number of trainable parameters with well-defined effects, prediction reliability and data integrity are guaranteed at all times in the proposed pipelines, in contrast to most other deep learning-based denoising architectures.
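The filter layer described above has only four trainable parameters: three spatial sigmas and one intensity range sigma. As an illustration of the underlying operation, here is a minimal brute-force 2D sketch of a classical bilateral filter (NumPy; function and parameter names are our own, and the paper's trainable/gradient machinery is omitted):

```python
import numpy as np

def bilateral_filter(img, sigma_spatial=1.0, sigma_range=0.1, radius=2):
    """Brute-force bilateral filter on a 2D float image.

    sigma_spatial and sigma_range correspond to the kind of per-layer
    parameters the paper learns by gradient descent (three spatial sigmas
    plus one range sigma in the 3D case).
    """
    h, w = img.shape
    out = np.zeros_like(img)
    # Precompute the spatial Gaussian weights over the filter window.
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    spatial_w = np.exp(-(xs**2 + ys**2) / (2 * sigma_spatial**2))
    pad = np.pad(img, radius, mode="edge")
    for i in range(h):
        for j in range(w):
            patch = pad[i:i + 2 * radius + 1, j:j + 2 * radius + 1]
            # Range weights penalize intensity differences to the center
            # pixel, which is what preserves edges while smoothing noise.
            range_w = np.exp(-((patch - img[i, j]) ** 2)
                             / (2 * sigma_range**2))
            weights = spatial_w * range_w
            out[i, j] = np.sum(weights * patch) / np.sum(weights)
    return out
```

A constant image passes through unchanged, while additive noise is averaged away whenever intensity differences stay well below sigma_range.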
Background
The use of deep learning has successfully solved several problems in the field of medical imaging, and it has been applied successfully to the CT denoising problem. However, deep learning requires large amounts of data to train deep convolutional neural networks (CNNs). Moreover, due to their large parameter count, such deep CNNs may produce unexpected results.

Purpose
In this study, we introduce a novel CT denoising framework that has interpretable behavior and provides useful results with limited data.

Methods
We employ bilateral filtering in both the projection and volume domains to remove noise. To account for nonstationary noise, we tune the filter's σ parameters for every projection view and every volume pixel. The tuning is carried out by two deep CNNs. Because labeling is impractical, the two deep CNNs are trained via a Deep-Q reinforcement learning task. The reward for the task is generated by a custom reward function represented by a neural network. Our experiments were carried out on abdominal scans from the Mayo Clinic data set hosted on The Cancer Imaging Archive (TCIA) and from the American Association of Physicists in Medicine (AAPM) Low Dose CT Grand Challenge.

Results
Our denoising framework achieves excellent denoising performance, increasing the peak signal-to-noise ratio (PSNR) from 28.53 to 28.93 and the structural similarity index (SSIM) from 0.8952 to 0.9204. It outperforms several state-of-the-art deep CNNs with orders of magnitude more parameters (p-value [PSNR] = 0.000, p-value [SSIM] = 0.000). Our method introduces neither the blurring caused by mean squared error (MSE) loss-based methods nor the deep learning artifacts introduced by Wasserstein generative adversarial network (WGAN)-based models. Our ablation studies show that parameter tuning combined with our reward network yields the best results.

Conclusions
We present a novel CT denoising framework that focuses on interpretability to deliver good denoising performance, especially with limited data. Our method outperforms state-of-the-art deep neural networks. Future work will focus on accelerating our method and generalizing it to different geometries and body parts.
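Both abstracts above report denoising quality as PSNR and SSIM. For reference, a minimal sketch of these two metrics (a global SSIM without the usual local Gaussian windowing; names and defaults are illustrative and not taken from either paper):

```python
import numpy as np

def psnr(ref, test, data_range=1.0):
    """Peak signal-to-noise ratio in dB against a reference image."""
    mse = np.mean((ref - test) ** 2)
    return 10 * np.log10(data_range**2 / mse)

def ssim_global(x, y, data_range=1.0, k1=0.01, k2=0.03):
    """Single-window (global) SSIM; published values typically use a
    local sliding-window variant and average the resulting map."""
    c1, c2 = (k1 * data_range) ** 2, (k2 * data_range) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx**2 + my**2 + c1) * (vx + vy + c2))
```

A uniform intensity offset of 0.1 on a unit data range gives an MSE of 0.01 and hence a PSNR of exactly 20 dB, while SSIM of an image with itself is 1.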
Low-dose computed tomography (CT) denoising algorithms aim to enable reduced patient dose in routine CT acquisitions while maintaining high image quality. Recently, deep learning (DL)-based methods were introduced that outperform conventional denoising algorithms on this task due to their high model capacity. However, for DL-based denoising to transition to clinical practice, these data-driven approaches must generalize robustly beyond the seen training data. We therefore propose a hybrid denoising approach consisting of a set of trainable joint bilateral filters (JBFs) combined with a convolutional DL-based denoising network that predicts the guidance image. Our proposed denoising pipeline combines the high model capacity enabled by DL-based feature extraction with the reliability of the conventional JBF. The pipeline's ability to generalize is demonstrated by training on abdomen CT scans without metal implants and testing on abdomen scans with metal implants as well as on head CT data. When embedding RED-CNN or QAE, two well-established DL-based denoisers, in our pipeline, the denoising performance is improved by 10%/82% (RMSE) and 3%/81% (PSNR) in regions containing metal, and by 6%/78% (RMSE) and 2%/4% (PSNR) on head CT data, compared to the respective vanilla models. In conclusion, the proposed trainable JBFs limit the error bound of the deep neural networks and thus facilitate the applicability of DL-based denoisers in low-dose CT pipelines.
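The joint bilateral filter described above computes its range weights on a guidance image (in the paper, predicted by a DL network) rather than on the noisy input itself. A minimal brute-force 2D sketch of that idea (NumPy; all names are illustrative, and the guidance image here is just any array of the same shape):

```python
import numpy as np

def joint_bilateral_filter(img, guide, sigma_spatial=1.5,
                           sigma_range=0.1, radius=2):
    """Joint bilateral filter sketch: spatial weights as in the classic
    filter, but range weights come from a separate guidance image."""
    h, w = img.shape
    out = np.zeros_like(img)
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    spatial_w = np.exp(-(xs**2 + ys**2) / (2 * sigma_spatial**2))
    pad_img = np.pad(img, radius, mode="edge")
    pad_g = np.pad(guide, radius, mode="edge")
    for i in range(h):
        for j in range(w):
            patch = pad_img[i:i + 2 * radius + 1, j:j + 2 * radius + 1]
            g_patch = pad_g[i:i + 2 * radius + 1, j:j + 2 * radius + 1]
            # Edge preservation is driven by the guidance image, not by
            # the (noisy) input intensities themselves.
            range_w = np.exp(-((g_patch - guide[i, j]) ** 2)
                             / (2 * sigma_range**2))
            weights = spatial_w * range_w
            out[i, j] = np.sum(weights * patch) / np.sum(weights)
    return out
```

With a sharp step edge in the guidance image and a small sigma_range, the filter smooths within each side of the edge but does not mix intensities across it, which is the reliability property the hybrid pipeline relies on.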