Image deblurring is a challenging problem in computational photography and computer vision. In the deep learning era, deblurring methods built on neural networks have achieved significant results. However, existing methods mainly focus on solving a specific image deblurring problem and overlook the origin of the motion blur. In this paper, we revisit how blur occurs and divide it into three categories: blur caused by relative motion between the camera and the scene, blur caused by the movement of an object itself, and blur at the edges of the image, where discontinuities may arise because pixel trajectories are sampled outside the image boundary. To address these different kinds of blur within a single image, we propose a two-stage neural network for image deblurring named RAID-Net. To remove the globally blurred regions caused by camera motion, we first use a U-shaped network to obtain a coarse deblurred image. We then leverage an adaptive reasoning module that jointly models the relationships between different blurry regions within one image and removes the other two categories of motion blur. Experiments on two public benchmark datasets demonstrate that our method achieves results comparable to or better than state-of-the-art methods.
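The two-stage pipeline described above can be sketched at the data-flow level as follows. This is a minimal, hedged illustration only: the function names (`coarse_deblur`, `adaptive_refine`, `raid_net_forward`) are hypothetical placeholders, and the identity/residual stubs stand in for the paper's learned U-shaped network and adaptive reasoning module, which are not specified here.

```python
import numpy as np

def coarse_deblur(blurry):
    # Stage 1 stub: stands in for the U-shaped encoder-decoder that
    # removes globally uniform blur caused by camera motion.
    # (Hypothetical placeholder; the actual network is learned.)
    return blurry  # identity stub; preserves the N,C,H,W shape

def adaptive_refine(coarse, blurry):
    # Stage 2 stub: stands in for the adaptive reasoning module that
    # jointly models region-specific blur (object motion, image-edge
    # discontinuities) and refines the coarse result. Shown here as a
    # residual-style combination for illustration only.
    return coarse + 0.0 * blurry  # shape-preserving stub

def raid_net_forward(blurry):
    # Hypothetical end-to-end forward pass mirroring the described
    # two-stage design: coarse global deblurring, then adaptive
    # refinement of the remaining region-specific blur.
    coarse = coarse_deblur(blurry)
    return adaptive_refine(coarse, blurry)

img = np.random.rand(1, 3, 64, 64)  # a blurry input in N,C,H,W layout
out = raid_net_forward(img)
print(out.shape)
```

The key structural point the sketch captures is that the second stage consumes both the coarse result and the original blurry input, so region-specific reasoning can still access cues that the first stage may have smoothed away.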