Background
Because of the potential health risks of the radiation generated by computed tomography (CT), there is growing concern about reducing the radiation dose. However, low-dose CT (LDCT) images contain complex noise and artifacts, which bring uncertainty to medical diagnosis.

Purpose
Existing deep learning (DL)-based denoising methods struggle to fully exploit hierarchical features at different levels, which limits their denoising performance. Moreover, the standard convolution kernel shares its parameters and cannot adjust dynamically as the input changes. This paper proposes an LDCT denoising network that uses high-level feature refinement and multiscale dynamic convolution to mitigate these problems.

Methods
The dual-network structure proposed in this paper consists of a feature refinement network (FRN) and a dynamic perception network (DPN). The FRN extracts features at different levels through residual dense connections, and its high-level hierarchical information is transmitted to the DPN to improve the low-level representations. In the DPN, the two networks' features are fused by local channel attention (LCA), which assigns weights to different regions and better handles the delicate tissues in CT images. A dynamic dilated convolution (DDC) with multibranch, multiscale receptive fields is then proposed to enhance the expressive and processing ability of the denoising network. The experiments were trained and tested on the "NIH-AAPM-Mayo Clinic Low-Dose CT Grand Challenge" dataset, consisting of normal-dose abdominal CT and 25%-dose LDCT scans from 10 anonymized patients. In addition, external validation was performed on the "Low Dose CT Image and Projection Data" dataset, which includes 300 chest CT images at 10% dose and 300 head CT images at 25% dose.

Results
The proposed method was compared with seven mainstream LDCT denoising algorithms.
On the Mayo dataset, it achieved a peak signal-to-noise ratio (PSNR) of 46.3526 dB (95% CI: 46.0121-46.6931 dB) and a structural similarity (SSIM) of 0.9844 (95% CI: 0.9834-0.9854), average improvements of 3.4159 dB and 0.0239 over LDCT, respectively. These results are near-optimal and statistically significant compared with the other methods. In external validation, the algorithm copes well with ultra-low-dose chest CT images at 10% dose, obtaining a PSNR of 28.6130 dB (95% CI: 28.1680-29.0580 dB) and an SSIM of 0.7201 (95% CI: 0.7101-0.7301), improvements of 3.6536 dB and 0.2132 over LDCT, respectively. Image quality is also improved in head CT denoising.

Conclusions
This paper proposes a DL-based LDCT denoising algorithm that utilizes high-level features and multiscale dynamic convolution to optimize the network's denoising performance. The method achieves fast denoising and performs well in both noise suppression and detail preservation, which can aid the diagnosis of LDCT.
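The PSNR figures reported above follow directly from the mean squared error between the denoised and normal-dose images. As a point of reference, here is a minimal sketch of the standard PSNR computation in NumPy; the images and data range below are illustrative placeholders, not data from the paper's experiments:

```python
import numpy as np

def psnr(reference: np.ndarray, test: np.ndarray, data_range: float = 1.0) -> float:
    """Peak signal-to-noise ratio in dB between two equally shaped images."""
    mse = np.mean((reference.astype(np.float64) - test.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10((data_range ** 2) / mse)

# Illustrative example: a constant error of 0.1 on a [0, 1]-range image
ref = np.zeros((64, 64))
noisy = ref + 0.1
print(round(psnr(ref, noisy), 2))  # → 20.0
```

Higher PSNR means the denoised output is numerically closer to the normal-dose reference; for CT, `data_range` would be set to the span of the intensity scale in use (e.g., the HU window applied before evaluation).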
Previous single-image super-resolution (SISR) methods assume that the blur kernel used to degrade high-resolution (HR) images into low-resolution (LR) images is known (e.g., bicubic), and they train a model on this single degradation to restore HR images. However, the actual degradation in the real world is often unknown, making it difficult to handle LR images produced by different degradations. To cope with this situation, previous methods attempt to restore SR images using a blur kernel estimation structure combined with a non-blind SR network. Two problems deserve serious consideration: (1) for accurate blur kernel estimation, insufficient correlation between consecutive kernel estimates leads to unsatisfactory reconstruction results; (2) for the ill-posed problem of image reconstruction, a more effective constraint is worth exploring. To solve these two problems, we propose an iterative dual regression network for adaptive and precise blur kernel estimation, which speeds up kernel estimation by learning a dual mapping. Specifically, we design a Predictor-Generator structure: over several iterations, the Predictor searches for accurate kernels using intermediate kernels and the generated SR images, while the Generator produces the final SR images with the help of the predicted kernels. More importantly, the elaborately designed dual learning strategy not only provides an additional constraint for accurate kernel estimation but also reduces the domain gap between SR images and HR images. In experiments on synthetically degraded images and real-world images, our network is competitive in performance and superior in visual quality.
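The dual mapping described above constrains the generated SR image by degrading it back to LR space and comparing it against the observed LR input. The sketch below illustrates that consistency check with a simple box-filter downsampler; note that this fixed degradation model and the function names are illustrative stand-ins, not the paper's learned dual regression network:

```python
import numpy as np

def box_downsample(img: np.ndarray, scale: int) -> np.ndarray:
    """Average-pool a 2-D image by an integer scale factor (a crude degradation model)."""
    h, w = img.shape
    return img[: h - h % scale, : w - w % scale].reshape(
        h // scale, scale, w // scale, scale
    ).mean(axis=(1, 3))

def dual_consistency_loss(sr: np.ndarray, lr: np.ndarray, scale: int) -> float:
    """L1 distance between the re-degraded SR image and the observed LR input."""
    return float(np.mean(np.abs(box_downsample(sr, scale) - lr)))

# Illustrative example: an SR image that is exactly consistent with the LR input
lr = np.random.rand(16, 16)
sr = np.kron(lr, np.ones((2, 2)))  # nearest-neighbour upsampling of lr
print(dual_consistency_loss(sr, lr, scale=2))  # numerically zero
```

In the actual method, the backward (SR-to-LR) mapping is learned jointly with the forward network, so this loss term both regularizes the ill-posed reconstruction and supplies a training signal for the kernel estimate.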