Low-dose computed tomography (LDCT) is crucial for reducing the risk of radiation exposure to patients. However, the high noise level in LDCT images can degrade image quality, leading to less accurate diagnoses. Deep learning technology, especially supervised methods, has recently been widely accepted as a powerful tool for LDCT image denoising tasks. However, supervised methods require numerous paired datasets of LDCT and high-quality reference CT images, which are rarely available in real-world clinical scenarios. This study presents an unsupervised learning-based framework called MM-Net, consisting of two training steps for a volumetric LDCT denoising task. In the two-step training approach, we first train the initial denoising network, a multi-scale attention U-Net (MSAU-Net), in a self-supervised manner to predict the noise-suppressed center slice from a neighboring multi-slice input. The second training step trains the U-Net-based final denoiser on top of the pre-trained MSAU-Net to improve image quality by introducing a new multi-patch and multi-mask matching loss. Qualitative visual inspection and quantitative measures across texturally different domains of clinical and animal data reveal that the proposed MM-Net outperformed all competing state-of-the-art unsupervised algorithms. The unsupervised method also achieved denoising performance comparable to representative supervised methods trained with ground-truth images.
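The first, self-supervised training step pairs each noisy center slice with its neighboring slices as input. A minimal sketch of how such training pairs could be assembled from a noisy volume is shown below; the function name, neighbor count, and array layout are illustrative assumptions, not the paper's exact configuration.

```python
import numpy as np

def make_selfsup_pairs(volume, n_neighbors=2):
    """Build (multi-slice input, center-slice target) pairs from a noisy CT
    volume: the network sees 2*n_neighbors + 1 adjacent slices and is trained
    to predict the (noisy) center slice, with no clean reference needed.
    `volume` has shape (depth, H, W)."""
    depth = volume.shape[0]
    inputs, targets = [], []
    for c in range(n_neighbors, depth - n_neighbors):
        stack = volume[c - n_neighbors : c + n_neighbors + 1]  # (2k+1, H, W)
        inputs.append(stack)
        targets.append(volume[c])  # noisy center slice as self-supervised target
    return np.stack(inputs), np.stack(targets)

# toy volume: 8 noisy slices of 16x16
vol = np.random.rand(8, 16, 16).astype(np.float32)
X, y = make_selfsup_pairs(vol, n_neighbors=2)
print(X.shape, y.shape)  # (4, 5, 16, 16) (4, 16, 16)
```

The center slice of each input stack equals its own target, which is what lets the network learn noise suppression without paired clean data.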
Fluoroscopy at low tube output is used to reduce the damage associated with radiation exposure. However, lowering the radiation dose inevitably increases random noise in x-ray images, resulting in poor diagnostic image quality and requiring noise reduction for accurate diagnosis. In addition, non-static objects blur the image due to motion. The widely used recursive-filter (RF) denoiser preserves details well when applied to temporal data but is vulnerable to motion blur. Existing convolutional neural network (CNN)-based algorithms with single-frame input cannot use the temporal context, while those with multi-frame input are good at motion detection but poor at detail preservation. Therefore, we propose a motion-level-aware denoising framework that combines the results of RF- and CNN-based algorithms according to the pixel-wise magnitude of motion so that the two complement each other. The data we use are fluoroscopy images taken at consecutive time points, and we adopt a many-to-one scheme in which one frame is denoised by considering its sequential frames. Since both the RF- and CNN-based algorithms in our architecture are many-to-one methods, they can exploit spatiotemporal information. From the multi-frame input, the inter-frame intensity difference of each pixel is computed to obtain a moving map. Depending on the factor value derived from the moving map, the final image is obtained by weighting the outputs of the RF- and CNN-based algorithms. Where the factor value is high, the pixel intensity of the final image is close to the CNN-based output, which is good at motion detection; where it is low, the final image more strongly reflects the RF output, which excels in perceptual quality. The framework therefore prevents motion blur without over-smoothing fine details such as bones and muscles. The results show that combining the two outputs yields a higher peak signal-to-noise ratio (PSNR) and better perceptual quality for diagnosis than using either method alone.
Furthermore, our combining method can produce x-ray images of even higher quality when paired with more advanced networks in future fluoroscopy denoising studies, since the proposed framework is not tied to the specific architectures used in this study and can be broadly applied to alternative networks.
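The pixel-wise blending described above can be sketched as follows. The recursive filter here is a simple exponential temporal average standing in for the RF branch, `cnn_out` is a placeholder for the CNN branch output, and the normalization of the moving map into a [0, 1] factor (via the `thresh` parameter) is an assumption, not the paper's exact rule.

```python
import numpy as np

def recursive_filter(frames, alpha=0.2):
    """Stand-in for the RF branch: exponential temporal averaging,
    out_t = alpha * frame_t + (1 - alpha) * out_{t-1}."""
    out = frames[0].astype(np.float64)
    for f in frames[1:]:
        out = alpha * f + (1.0 - alpha) * out
    return out

def motion_aware_blend(frames, cnn_out, thresh=0.1):
    """Blend RF and CNN outputs per pixel by motion magnitude.
    `frames` has shape (T, H, W); the moving map is the mean absolute
    inter-frame difference, clipped into a factor in [0, 1]."""
    diffs = np.abs(np.diff(frames.astype(np.float64), axis=0))
    moving_map = diffs.mean(axis=0)                   # pixel-wise motion level
    factor = np.clip(moving_map / thresh, 0.0, 1.0)   # 1 = strong motion
    rf_out = recursive_filter(frames)
    # CNN output dominates where motion is strong, RF output where it is weak
    return factor * cnn_out + (1.0 - factor) * rf_out
```

A static pixel (factor 0) takes its value entirely from the RF output, while a strongly moving pixel (factor 1) takes it from the CNN output, matching the behavior described in the abstract.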
Background: Single-grid imaging systems are effective for obtaining phase-contrast X-ray images, but Moiré artifacts remain a significant issue. A common solution for removing Moiré artifacts is grid rotation, which separates these artifacts from sample information in the Fourier space. However, the mechanical movement of grid rotation is slower than the real-time change in Moiré artifacts, so artifacts generated during real-time imaging cannot be removed this way. To overcome this problem, we propose an effective method to obtain phase-contrast X-ray images using instantaneous frequency and noise filtering. Results: The proposed method, based on instantaneous frequency and noise filtering, effectively suppressed noise and Moiré patterns. It also preserved the clear edges of the inner and outer boundaries and the internal anatomical information of the biological sample, outperforming conventional Fourier analysis-based methods for absorption, scattering, and phase-contrast X-ray images. In particular, when comparing the phase information from the proposed method with the x-axis gradient image from the absorption image, the proposed method correctly distinguished two different types of soft tissue and their detailed structure, while the latter did not. Conclusion: This study achieved a significant improvement in image quality for phase-contrast X-ray images using instantaneous frequency and noise filtering, and can provide a foundation for real-time bio-imaging research using three-dimensional computed tomography.
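For context, the conventional Fourier-space separation that the abstract compares against can be sketched in one dimension: the grid multiplies the sample signal by a (near-)sinusoidal pattern, so the spectrum contains a harmonic at the grid frequency, and windowing around DC recovers the absorption signal. The grid frequency, window radius, and toy sample profile below are illustrative assumptions.

```python
import numpy as np

n = 128
fx = 16                                    # grid lines per image width (assumed)
x = np.arange(n)
grid = 0.5 * (1.0 + np.cos(2 * np.pi * fx * x / n))      # 1-D grid pattern
sample = np.exp(-((x - n / 2) ** 2) / (2 * 15.0 ** 2))   # toy absorption profile
img = sample * grid                        # grid-modulated measurement

# Fourier analysis: the sample spectrum sits around DC; grid-modulated copies
# sit around +/- fx. A low-pass window around DC recovers absorption.
F = np.fft.fft(img)
freqs = np.fft.fftfreq(n, d=1.0 / n)
win = np.abs(freqs) < 8                    # low-pass window radius (assumed)
absorption = np.real(np.fft.ifft(F * win)) * 2.0  # undo the grid's mean of 0.5
```

In real single-grid imaging the windows must also avoid the Moiré peaks, which is exactly where this Fourier approach struggles and where the proposed instantaneous-frequency method is reported to do better.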