Many image capture devices have a single, limited focal plane, so a single shot cannot keep objects at different distances in focus at the same time. Multi-focus image fusion addresses this by combining the important information from two or more partially focused images into a single all-in-focus image. In this paper, a new multi-focus image fusion method based on the Bat Algorithm (BA) within a Multi-Scale Transform (MST) framework is presented to overcome the limitations of standard MST fusion. First, a specific MST (Laplacian Pyramid or Curvelet Transform) is applied to the two source images to obtain their low-pass and high-pass bands. Second, the optimization algorithm is used to find optimal weights for the coefficients in the low-pass bands, improving the accuracy of the fused image. Finally, the fused multi-focus image is reconstructed by the inverse MST. The experimental results are compared with those of other methods using reference and no-reference evaluation metrics to assess the performance of the image fusion methods.
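The pipeline above (decompose, fuse the bands, invert the transform) can be illustrated with a minimal sketch. For clarity this uses a Laplacian-style band decomposition without downsampling, fuses high-pass bands by the larger absolute coefficient, and takes the low-pass weight `w` as a given scalar; in the paper that weight is what the Bat Algorithm searches for, and the optimizer itself is not implemented here.

```python
import numpy as np

def blur(img):
    """Separable 5-tap binomial filter (a simple Gaussian approximation)."""
    k = np.array([1, 4, 6, 4, 1], dtype=float) / 16.0
    out = np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 0, img)
    return np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 1, out)

def decompose(img, levels=3):
    """Split an image into high-pass detail bands plus a low-pass residual
    (a Laplacian-style pyramid kept at full resolution for simplicity)."""
    bands, cur = [], img.astype(float)
    for _ in range(levels):
        low = blur(cur)
        bands.append(cur - low)   # high-pass band at this scale
        cur = low
    bands.append(cur)             # low-pass residual
    return bands

def fuse(img_a, img_b, w, levels=3):
    """Fuse two source images: high-pass bands by the larger absolute
    coefficient, the low-pass band by a weight w in [0, 1]."""
    pa, pb = decompose(img_a, levels), decompose(img_b, levels)
    fused = [np.where(np.abs(a) >= np.abs(b), a, b)
             for a, b in zip(pa[:-1], pb[:-1])]
    fused.append(w * pa[-1] + (1.0 - w) * pb[-1])  # weighted low-pass fusion
    return sum(fused)             # inverse transform: sum the bands back up
```

Because the bands telescope, fusing an image with itself (any `w`) reconstructs it exactly, which is a convenient sanity check for the round trip.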
Multimodal medical image fusion is widely used in disease diagnosis: it merges multiple images of different modalities to achieve superior image quality and to reduce uncertainty and redundancy, increasing clinical applicability. In this paper, we propose a new medical image fusion algorithm based on a Convolutional Neural Network (CNN) that generates a weight map in multiscale transform (Curvelet / Non-Subsampled Shearlet Transform) domains, enhancing texture and edge properties. The aim of the method is to achieve the best visualization and the highest level of detail in a single fused image without losing spectral or anatomical information. In the proposed method, first, the Non-Subsampled Shearlet Transform (NSST) and the Curvelet Transform (CvT) are used to decompose the source images into low-frequency and high-frequency coefficients. Second, the low-frequency and high-frequency coefficients are fused using a weight map generated by a Siamese Convolutional Neural Network (SCNN); the weight map is obtained from a series of feature maps and combines pixel-activity information from the different sources. Finally, the fused image is reconstructed by the inverse MST. The proposed method was tested on standard gray-scale Magnetic Resonance (MR) images and color Positron Emission Tomography (PET) images taken from Brain Atlas datasets. It effectively preserves detailed structural information and performs well in both visual quality and objective assessment. The fusion results were evaluated quantitatively and qualitatively according to quality metrics.
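The weight-map fusion step above can be sketched as follows. Training and running a Siamese CNN is out of scope for a short example, so the hypothetical `activity` function below (local energy in a small window) stands in for the per-pixel activity the paper's SCNN would estimate; the fusion rule itself, a soft per-pixel weighting of two same-scale coefficient bands, is the part being illustrated.

```python
import numpy as np

def activity(band, win=3):
    """Local energy of a coefficient band: sum of squared coefficients in a
    win x win neighborhood. A simple stand-in for a CNN-derived activity map."""
    pad = win // 2
    padded = np.pad(band.astype(float) ** 2, pad, mode="edge")
    out = np.zeros(band.shape, dtype=float)
    for dy in range(win):
        for dx in range(win):
            out += padded[dy:dy + band.shape[0], dx:dx + band.shape[1]]
    return out

def weight_map_fuse(band_a, band_b):
    """Fuse two same-scale coefficient bands with a soft weight map:
    each pixel is a convex combination driven by relative local activity."""
    ea, eb = activity(band_a), activity(band_b)
    w = ea / (ea + eb + 1e-12)       # weight map in [0, 1]
    return w * band_a + (1.0 - w) * band_b
```

A band with strong local activity dominates the fused output wherever the other source is flat, which is the behavior the learned weight map is meant to provide at every decomposition scale.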