Purpose: In recent years, health risks from high-dose x-ray radiation have become a major concern in dental computed tomography (CT) examinations. Therefore, adopting low-dose computed tomography (LDCT) technology has become a major focus in the CT imaging field. One such LDCT technique is downsampled data acquisition during the x-ray imaging process. However, reducing the radiation dose can adversely affect CT image quality by introducing noise and artifacts into the resultant image that can compromise diagnostic information. In this paper, we propose an artifact correction method for downsampled CT reconstruction based on deep learning. Method: We used clinical dental CT data with low-dose artifacts, reconstructed by conventional filtered back projection (FBP), as inputs to a deep neural network, with corresponding high-quality normal-dose CT data as labels during training. We trained a generative adversarial network (GAN) with Wasserstein distance (WGAN) and mean squared error (MSE) loss, called m-WGAN, to remove artifacts and obtain high-quality dental CT images in a clinical dental CT examination environment. Results: The experimental results confirmed that the proposed algorithm effectively removes low-dose artifacts from dental CT scans. In addition, we showed that the proposed method removes noise from low-dose CT scan images more efficiently than existing approaches. We compared the performance of a general GAN, convolutional neural networks, and m-WGAN. Through quantitative and qualitative analysis of the results, we concluded that the proposed m-WGAN method achieves better artifact correction while preserving texture in dental CT scans. Conclusions: The image quality evaluation metrics indicated that the proposed method effectively improves image quality when used as a postprocessing technique for dental CT images.
To the best of our knowledge, this work is the first to apply a deep learning architecture to a commercial cone-beam dental CT scanner. The artifact correction performance was rigorously evaluated and demonstrated to be effective. Therefore, we believe that the proposed algorithm represents a new direction in the research area of low-dose dental CT artifact correction.
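The abstract describes a generator trained with a combined WGAN and MSE objective. The following is a minimal numpy sketch of that idea, not the authors' implementation: the function name `mwgan_generator_loss` and the weighting factor `lam` are illustrative assumptions, and the critic scores stand in for the output of a trained Wasserstein critic.

```python
import numpy as np

def mwgan_generator_loss(critic_scores, generated, target, lam=0.1):
    """Hypothetical m-WGAN generator objective (illustrative only):
    a Wasserstein adversarial term plus a lambda-weighted MSE term
    between the generated image and the normal-dose target."""
    adv = -np.mean(critic_scores)            # generator tries to raise critic scores
    mse = np.mean((generated - target) ** 2)  # pixel-wise fidelity to normal-dose label
    return adv + lam * mse
```

Combining the adversarial term with MSE is what lets the network remove artifacts without over-smoothing: the MSE term anchors the output to the normal-dose label, while the Wasserstein term pushes it toward the distribution of realistic CT textures.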
In this paper, a single-image computed tomography (CT) super-resolution (SR) reconstruction scheme is proposed. This SR reconstruction scheme is based on sparse representation theory and dictionary learning over low- and high-resolution image patch pairs, and aims to improve the poor quality of low-resolution CT images obtained in clinical practice using low-dose CT technology. The proposed strategy is based on the idea that image patches can be well represented by sparse combinations of elements from an overcomplete dictionary. To ensure the sparse representations are shared, two dictionaries of low- and high-resolution image patches are jointly trained. Then, sparse representation coefficients extracted from the low-resolution input patches are used with the high-resolution dictionary to reconstruct the high-resolution output. Because sparse representations are compact, the trained dictionary pair keeps computational costs low. Combined with several appropriate iterative operations, the reconstructed high-resolution image attains better image quality. The effectiveness of the proposed method is demonstrated using both clinical CT data and simulated image data. Image quality evaluation metrics (root mean squared error (RMSE) and peak signal-to-noise ratio (PSNR)) indicate that the proposed method can effectively improve the resolution of a single CT image.
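The core idea above is that coefficients computed against the low-resolution dictionary are reused with the paired high-resolution dictionary. The toy sketch below illustrates that coefficient transfer, with one simplification stated up front: a plain least-squares solve stands in for true sparse coding (e.g., an L1-regularized solver), and the tiny dictionaries `Dl`/`Dh` are invented for the example.

```python
import numpy as np

# Toy joint dictionaries: column i of Dl and column i of Dh are a
# paired low-/high-resolution atom (invented values for illustration).
Dl = np.array([[1.0, 0.0],
               [0.0, 1.0]])   # low-resolution dictionary
Dh = np.array([[2.0, 0.0],
               [0.0, 3.0]])   # paired high-resolution dictionary

def reconstruct_patch(y_low, Dl, Dh):
    """Fit coefficients on the low-res dictionary (least squares here,
    in place of a real sparse-coding step), then reuse the same
    coefficients with the high-res dictionary."""
    alpha, *_ = np.linalg.lstsq(Dl, y_low, rcond=None)
    return Dh @ alpha

x_high = reconstruct_patch(np.array([1.0, 2.0]), Dl, Dh)
```

In the actual scheme, the joint training of `Dl` and `Dh` is what makes this coefficient reuse valid: both dictionaries are forced to admit the same sparse code for corresponding patch pairs.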
Background: Recently, the paradigm of computed tomography (CT) reconstruction has shifted as deep learning techniques have evolved. In this study, we proposed a new convolutional neural network (called ADAPTIVE-NET) to perform CT image reconstruction directly from a sinogram by integrating analytical domain-transformation knowledge. Methods: In the proposed ADAPTIVE-NET, a specific network layer with constant weights was customized to transform the sinogram onto the CT image domain via analytical back-projection. With this new framework, feature extraction was performed simultaneously on both the sinogram domain and the CT image domain. The Mayo low-dose CT (LDCT) data was used to validate the new network. In particular, the new network was compared with the previously proposed residual encoder-decoder (RED)-CNN network. For each network, the mean squared error (MSE) loss with and without VGG-based perceptual loss was compared. Furthermore, to evaluate the image quality with quantitative metrics, the noise correlation was quantified via the noise power spectrum (NPS) on the reconstructed LDCT for each method. Results: CT images with clinically relevant dimensions of 512×512 can be easily reconstructed from a sinogram by ADAPTIVE-NET on a single graphics processing unit (GPU) with moderate memory size (e.g., 11 GB). With the same MSE loss function, the new network is able to generate better results than RED-CNN. Moreover, the new network is able to reconstruct natural-looking CT images with enhanced image quality when jointly using the VGG loss. Conclusions: The newly proposed end-to-end supervised ADAPTIVE-NET is able to reconstruct high-quality LDCT images directly from a sinogram.
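The key architectural element described above is a layer with constant (non-trainable) weights that implements analytical back-projection inside the network. The sketch below shows the idea at toy scale, under stated assumptions: the 2×3 system matrix `A` and the unfiltered back-projection `A.T @ sinogram` are illustrative stand-ins, not the paper's actual projection geometry or operator.

```python
import numpy as np

# Toy system matrix A mapping a 3-pixel image to a 2-bin sinogram
# (invented values). Its transpose acts as an unfiltered
# back-projection operator from sinogram space to image space.
A = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0]])

def backprojection_layer(sinogram):
    """Fixed-weight network layer: the weights (A^T) encode the
    analytical domain transform and are never updated by training."""
    return A.T @ sinogram

img = backprojection_layer(np.array([1.0, 2.0]))
```

Embedding the transform as a constant-weight layer is what allows gradients to flow end to end between the image-domain and sinogram-domain subnetworks without the network having to learn the domain transform itself.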