Image fusion is a visual enhancement technique that combines source images from different sensors to produce a more robust and informative fused image for subsequent processing or decision making. Infrared and visible light images have complementary properties that make them well suited to fusion. This paper proposes an infrared and visible image fusion method based on an improved tetrolet framework that raises fusion quality. First, the source images are enhanced by bicubic interpolation. The improved tetrolet transform then decomposes the enhanced source images; the high-frequency components are fused using convolutional sparse representation theory together with corresponding fusion rules, and the low-frequency components are fused by defining ISER descriptors. Finally, the inverse transform reconstructs the fused image. Qualitative and quantitative experiments on five groups of typical infrared and visible image datasets demonstrate the proposed method's effectiveness: it achieves better subjective visual quality and objective index scores than other state-of-the-art methods.

INDEX TERMS Image fusion, improved tetrolet transform, convolutional sparse representation, ISER descriptor.
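The abstract above outlines a four-step pipeline (bicubic enhancement, tetrolet decomposition, separate high-/low-frequency fusion, inverse transform). The following is a minimal Python sketch of that workflow, not the authors' implementation: only the bicubic enhancement uses a real OpenCV call, while the tetrolet, CSR, and ISER steps are hypothetical placeholder callables standing in for the paper's own methods.

```python
# Pipeline sketch of the fusion workflow described in the abstract.
# Only enhance_bicubic uses a real OpenCV call; the four callables passed
# to fuse_ir_visible are hypothetical placeholders for the paper's
# improved tetrolet transform, CSR fusion, ISER fusion, and inverse transform.
import cv2
import numpy as np

def enhance_bicubic(img, scale=2):
    """Upsample a source image with bicubic interpolation."""
    return cv2.resize(img, None, fx=scale, fy=scale,
                      interpolation=cv2.INTER_CUBIC)

def fuse_ir_visible(ir_path, vis_path,
                    tetrolet_decompose, fuse_high_csr,
                    fuse_low_iser, tetrolet_reconstruct):
    """Skeleton of the described fusion pipeline (placeholders assumed)."""
    ir = cv2.imread(ir_path, cv2.IMREAD_GRAYSCALE).astype(np.float32) / 255.0
    vis = cv2.imread(vis_path, cv2.IMREAD_GRAYSCALE).astype(np.float32) / 255.0

    # Step 1: enhance both source images by bicubic interpolation.
    ir, vis = enhance_bicubic(ir), enhance_bicubic(vis)

    # Step 2: decompose each image into low- and high-frequency sub-bands.
    ir_low, ir_high = tetrolet_decompose(ir)
    vis_low, vis_high = tetrolet_decompose(vis)

    # Step 3: fuse high-frequency bands with CSR, low-frequency with ISER.
    fused_high = fuse_high_csr(ir_high, vis_high)
    fused_low = fuse_low_iser(ir_low, vis_low)

    # Step 4: reconstruct the fused image with the inverse transform.
    return tetrolet_reconstruct(fused_low, fused_high)
```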
A novel infrared and visible image fusion method in a multilevel low-rank decomposition framework based on guided filtering and feature extraction is proposed to address the lack of edge information and blurred details in fused images. Within the multilevel low-rank decomposition, the fusion strategies for the base part and the detail contents are improved. First, the source infrared and visible images are decomposed into base part coefficients and n-level detail content coefficients by multilevel low-rank decomposition. Second, the base part coefficients are fed to the VGG-19 network to obtain a weight map, which is refined by guided filtering; the base part coefficients are then fused with the refined weight map to obtain the fused base part coefficients. The n-level detail content coefficients are fused using a dynamic level measurement rule with maximum-value selection and then reconstructed to obtain the final fused detail content coefficients. Finally, the fused base part and detail content are superimposed to produce the final fusion result. The results show that the fusion algorithm effectively preserves the edge and detail features of the source images. Compared with other state-of-the-art fusion methods, the proposed method performs better in objective assessment and visual quality: on six image pairs, the average entropy (EN) and mutual information (MI) are improved by 0.5337 and 1.0673, respectively.
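Two steps of this abstract lend themselves to a short illustration: refining a base-part weight map with a guided filter, and fusing detail coefficients by maximum-value selection. The sketch below assumes the weight map comes from some VGG-19 feature-based activity measure (not reproduced here); the guided-filter radius and eps values are illustrative guesses, not the paper's settings.

```python
# Sketch of weight-map refinement and max-value detail fusion, under the
# assumptions stated above. Requires opencv-contrib-python for cv2.ximgproc.
import cv2
import numpy as np

def refine_weight_map(weight_map, guide, radius=8, eps=1e-3):
    """Smooth a raw (e.g., VGG-19-derived) weight map with a guided filter."""
    return cv2.ximgproc.guidedFilter(
        guide.astype(np.float32), weight_map.astype(np.float32), radius, eps)

def fuse_base(base_ir, base_vis, weight_ir):
    """Weighted average of the base parts using the refined weight map."""
    return weight_ir * base_ir + (1.0 - weight_ir) * base_vis

def fuse_detail_max(detail_ir, detail_vis):
    """Per pixel, keep the detail coefficient with the larger magnitude."""
    mask = np.abs(detail_ir) >= np.abs(detail_vis)
    return np.where(mask, detail_ir, detail_vis)

# Final image: fused base part plus the reconstructed n-level detail layers, e.g.
# fused = fuse_base(b_ir, b_vis, w) + sum(fuse_detail_max(d_ir, d_vis)
#                                         for d_ir, d_vis in detail_pairs)
```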