In this paper, we address dynamic scene deblurring under motion blur. Restoring images degraded by motion blur requires a network whose receptive field fully covers every region that needs to be deblurred; existing networks enlarge the receptive field by stacking ordinary convolutional layers or by increasing the convolution kernel size, which inevitably adds computational burden. We propose a novel architecture built on a channel adaptive residual module. Different features of the blurred image are extracted and distributed across the feature channels; the network learns a weight for each channel and thus extracts image features adaptively according to the degree of blur and the importance of the information. We embed the module in a modified encoder-decoder design with skip connections to fuse multi-scale features for further performance gains. Extensive comparison with existing techniques on a standard dynamic scene deblurring benchmark shows that the proposed network deblurs images effectively, with accuracy and speed comparable to the state of the art.
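The abstract does not specify how the per-channel weights are computed; a common realization of such channel-adaptive weighting is squeeze-and-excitation-style attention, sketched below as a minimal illustration (the pooling, layer shapes, and activations here are assumptions, not the paper's exact design):

```python
import numpy as np

def channel_attention(features, w1, w2):
    """Reweight feature channels by learned importance.

    features: array of shape (C, H, W)
    w1, w2:   learned projection matrices of a small bottleneck MLP
    """
    squeeze = features.mean(axis=(1, 2))           # global average pool -> (C,)
    hidden = np.maximum(0, w1 @ squeeze)           # ReLU bottleneck
    weights = 1.0 / (1.0 + np.exp(-(w2 @ hidden))) # sigmoid -> per-channel weights in (0, 1)
    return features * weights[:, None, None]       # rescale each channel
```

In a trained network `w1` and `w2` would be learned, so channels carrying more useful (e.g. strongly blurred or informative) features receive weights closer to 1 and dominate the residual branch.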
Deep-learning-based motion deblurring has achieved good results, but some methods fail to restore fine texture detail. This paper therefore proposes a High-Frequency Attention Residual module (HFAR) that guides the network to learn more of the image's high-frequency texture information, improving the quality of detail restoration. The attention residual module consists of two sub-modules: a Fourier Channel Attention module (FCA) and an Edge Spatial Attention module (ESA). FCA assigns larger weights to the feature channels that carry more high-frequency information, while ESA assigns larger weights to the spatial regions of the feature maps that contain more high-frequency content, steering the network toward image details and texture. Extensive experiments on different datasets show that our method achieves state-of-the-art motion deblurring performance.
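The abstract only states that FCA weights channels by their high-frequency content; one plausible way to score that content is to measure each channel's spectral energy outside a low-frequency band. The sketch below is a hypothetical, non-learned illustration of that idea (the FFT-based scoring and the `cutoff` parameter are assumptions, not the paper's FCA):

```python
import numpy as np

def fourier_channel_weights(features, cutoff=0.25):
    """Score each channel by its high-frequency spectral energy.

    features: array of shape (C, H, W)
    cutoff:   half-width (as a fraction of H and W) of the low-frequency
              band to exclude around the DC component
    """
    C, H, W = features.shape
    spec = np.fft.fftshift(np.fft.fft2(features, axes=(1, 2)), axes=(1, 2))
    cy, cx = H // 2, W // 2
    ry, rx = int(H * cutoff), int(W * cutoff)
    mask = np.ones((H, W), dtype=bool)
    mask[cy - ry:cy + ry, cx - rx:cx + rx] = False   # suppress low frequencies
    energy = (np.abs(spec) ** 2 * mask).sum(axis=(1, 2))
    return energy / (energy.sum() + 1e-8)            # normalized per-channel weights
```

A channel holding a flat (low-frequency) map gets weight near 0, while a channel holding fine texture or edges gets weight near 1; in the actual FCA these weights would be produced by learned layers rather than a fixed cutoff.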