Multi-focus image fusion is an image processing technique that generates a single integrated image by merging multiple images of the same scene captured with different focus areas. For most fusion methods, detecting the focused regions is a critical step. In this paper, we propose a multi-focus image fusion algorithm based on a dual convolutional neural network (DualCNN), in which the focused regions are detected from super-resolved images. First, the source image is fed into a DualCNN to restore details and structure from its super-resolved counterpart and to improve the contrast of the source image. Second, a bilateral filter is used to reduce noise in the fused image, and a guided filter is used to detect the focused regions and refine the decision map. Finally, the fused image is obtained by weighting the source images according to the decision map. Experimental results show that our algorithm retains image details well and maintains spatial consistency. In multiple groups of experiments against existing methods, our algorithm achieves better visual perception according to both subjective evaluation and objective indices.

INDEX TERMS Multi-focus image fusion, super-resolution, dual convolutional neural network, focus area detection.
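The decision-map pipeline described above can be sketched with simple stand-ins: local variance replaces the DualCNN-based focus detection, and a box blur of the binary decision map stands in for the guided-filter refinement. All function names are illustrative assumptions, not the authors' implementation:

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def box_mean(img, k=5):
    """Local box average with edge padding (used as a cheap smoother)."""
    p = k // 2
    padded = np.pad(img, p, mode='edge')
    return sliding_window_view(padded, (k, k)).mean(axis=(-1, -2))

def focus_measure(img, k=5):
    """Local variance as a sharpness proxy (assumption: stands in for
    the paper's super-resolution-based focus detection)."""
    return box_mean(img**2, k) - box_mean(img, k)**2

def fuse(src_a, src_b, k=5):
    """Weight the two sources by a smoothed per-pixel decision map."""
    # Binary decision map: 1 where src_a is the sharper source.
    d = (focus_measure(src_a, k) >= focus_measure(src_b, k)).astype(float)
    # Smooth the map (box blur stands in for guided-filter refinement).
    w = box_mean(d, k)
    return w * src_a + (1.0 - w) * src_b
```

On a toy pair where each source blurs the opposite half of a shared ground truth, the fused result lies closer to the ground truth than either source, which is the behavior the decision map is meant to produce.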
In this paper, a remote sensing image fusion method is presented, motivated by the wide use of sparse representation (SR) in image processing and image fusion in particular. First, we learn an adaptive dictionary from the source images, and sparse coefficients are obtained by sparsely coding the source images over this dictionary. Then, these sparse coefficients are fused with the help of an improved hyperbolic tangent (tanh) function and the ℓ0-max rule. An initial fused image is thus obtained by SR-based fusion. To take full advantage of the spatial information of the source images, a spatial-domain (SF) fused image is obtained at the same time. Finally, the final fused image is reconstructed by guided filtering of the SR-based and SF-based fused images. Experimental results show that the proposed method outperforms several state-of-the-art methods in both visual and quantitative evaluations.
To obtain a clearer panoramic image with richer layers and texture features, we propose a multi-focus image fusion algorithm that combines the non-subsampled shearlet transform (NSST) with a residual network (ResNet). First, NSST decomposes a pair of input images into subband coefficients of different frequencies for subsequent feature processing. Then, ResNet is applied to fuse the low-frequency subband coefficients, while an improved gradient sum of Laplacian energy (IGSML) processes the high-frequency feature information. Finally, the inverse NSST is applied to the fused coefficients of the different frequencies to obtain the final fused image. By using NSST, our method fully accounts for both the low-frequency global features and the high-frequency detail information in the images. For low-frequency coefficient fusion, the deep structure of ResNet also lets us extract the spatial features of the low-frequency coefficient images. IGSML uses directional gradients to process high-frequency subband coefficients at different levels and in different directions, which is more conducive to their fusion. Experimental results show that the proposed method improves the structural features and edge texture of the fused images.

INDEX TERMS Image fusion, multi-focus image fusion, NSST, ResNet.
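The band-wise fusion scheme can be illustrated with heavy simplifications: a box-filter low/high split stands in for the far more elaborate NSST decomposition, simple averaging stands in for the ResNet-based low-frequency fusion, and a choose-max rule on absolute coefficient magnitude stands in for IGSML. All names here are illustrative assumptions:

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def split_bands(img, k=5):
    """Crude low/high frequency split (assumption: a box-filter split
    stands in for the multi-scale, multi-directional NSST)."""
    p = k // 2
    padded = np.pad(img, p, mode='edge')
    low = sliding_window_view(padded, (k, k)).mean(axis=(-1, -2))
    return low, img - low

def fuse_nsst_like(a, b, k=5):
    """Fuse per band, then recombine (stand-in for the inverse NSST)."""
    la, ha = split_bands(a, k)
    lb, hb = split_bands(b, k)
    # Low band: plain averaging (the paper fuses these with ResNet).
    low = 0.5 * (la + lb)
    # High band: choose-max on magnitude (an IGSML-style activity rule).
    high = np.where(np.abs(ha) >= np.abs(hb), ha, hb)
    return low + high
```

A basic sanity property of such band-wise schemes is that fusing an image with itself reproduces the image, since both bands are then fused trivially.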
In this paper, we propose a boosting synthetic aperture radar (SAR) image despeckling method based on non-local weighted group low-rank representation (WGLRR). The spatial structure of SAR images makes patches similar to one another, and the data matrix formed by grouping similar patches of a noise-free SAR image is often low-rank. Based on this, we use low-rank representation (LRR) to recover the noise-free group data matrix. To maintain fidelity, we integrate the corruption probability of each pixel into the group LRR model as a weight constraining the fidelity of the recovered noise-free patches. Since a single patch may belong to several groups, the different estimates of each patch are aggregated by weighted averaging. Because denoising is imperfect, the residual image still contains signal leftovers, so we strengthen the signal by leveraging the denoised image to suppress noise further. Experimental results on simulated and real SAR images show the superior performance of the proposed method in terms of both objective indicators and perceived image quality.
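The two core steps, low-rank recovery of a patch group and weighted aggregation of overlapping estimates, can be sketched minimally. A plain truncated SVD stands in for the paper's weighted group LRR model (which additionally weights by per-pixel corruption probability); function names and the `rank` parameter are illustrative assumptions:

```python
import numpy as np

def lowrank_denoise_group(group, rank=2):
    """Recover a noise-free patch group by rank truncation (assumption:
    plain truncated SVD stands in for the weighted group LRR model)."""
    u, s, vt = np.linalg.svd(group, full_matrices=False)
    s[rank:] = 0.0          # keep only the leading singular values
    return (u * s) @ vt

def aggregate(estimates, weights):
    """Weighted averaging of the several estimates of one patch,
    since a patch may belong to several groups."""
    w = np.asarray(weights, dtype=float)
    return np.tensordot(w / w.sum(), np.asarray(estimates), axes=1)
```

On a synthetic rank-1 group corrupted by additive noise, rank truncation recovers a matrix closer to the clean group than the noisy input, which is the property the low-rank prior exploits.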