Abstract: The aim of multifocus image fusion is to fuse images taken of the same scene with different focuses into a resultant image in which all objects are in focus. In this paper, a novel multifocus image fusion method based on the human visual system (HVS) and a back-propagation (BP) neural network is presented. Three features that reflect the clarity of a pixel are first extracted and used to train a BP neural network to determine which pixel is clearer. The clearer pixels are then used to construct the initial fus…
“…For the high-frequency coefficients, the most popular fusion rule is to select the coefficients with larger absolute values, but this rule takes no account of the surrounding pixels. The SML operator was developed to provide a local measure of the quality of image focus [29]. In [33], the SML is shown to be very efficient in the transform domain.…”
Section: The Proposed Image Fusion Methods
confidence: 99%
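The SML focus measure described in the excerpt can be sketched as follows. This is a minimal NumPy version assuming grayscale floating-point images; the function names, the `step` between neighbours, and the window radius `win` are illustrative choices, not values from the cited papers. The modified Laplacian takes absolute values of the second differences in x and y separately, so that opposite-signed responses do not cancel, and SML sums them over a local window:

```python
import numpy as np

def modified_laplacian(img, step=1):
    """Modified Laplacian: |2c - left - right| + |2c - up - down|."""
    img = np.asarray(img, dtype=np.float64)
    h, w = img.shape
    p = np.pad(img, step, mode="edge")
    center = p[step:step + h, step:step + w]
    left   = p[step:step + h, 0:w]
    right  = p[step:step + h, 2 * step:2 * step + w]
    up     = p[0:h, step:step + w]
    down   = p[2 * step:2 * step + h, step:step + w]
    return np.abs(2 * center - left - right) + np.abs(2 * center - up - down)

def sum_modified_laplacian(img, step=1, win=1):
    """SML: sum the modified Laplacian over a (2*win+1)^2 neighbourhood."""
    ml = modified_laplacian(img, step)
    h, w = ml.shape
    pm = np.pad(ml, win, mode="edge")
    k = 2 * win + 1
    sml = np.zeros_like(ml)
    for dy in range(k):        # accumulate shifted copies = windowed sum
        for dx in range(k):
            sml += pm[dy:dy + h, dx:dx + w]
    return sml
```

On a flat region the response is zero, while it peaks at in-focus edges, which is what makes it usable both as a pixel-level focus measure and as a rule for comparing transform-domain coefficients.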
“…The framework is divided into a visual-contrast-based DTCWT initial fusion process and a block-residual-based final fusion process. In the initial fusion process, the Sum-Modified-Laplacian (SML)-based visual contrast [29] and the SML [30] are employed as the fusion rules for the low- and high-frequency coefficients in the DTCWT domain, respectively. Using this model, the most important feature information is selected into the fused coefficients.…”
This paper presents a novel framework for the fusion of multi-focus images explicitly designed for visual sensor network (VSN) environments. Multi-scale fusion methods can often produce fused images with good visual effect. However, because of defects in the fusion rules, it is almost impossible to completely avoid the loss of useful information in the resulting fused images. The proposed fusion scheme is divided into two processes: initial fusion and final fusion. The initial fusion is based on a dual-tree complex wavelet transform (DTCWT). The Sum-Modified-Laplacian (SML)-based visual contrast and the SML are employed to fuse the low- and high-frequency coefficients, respectively, and an initial composited image is obtained. In the final fusion process, an image-block-residual technique and consistency verification are used to detect the focused areas, yielding a decision map that guides the construction of the final fused image. The performance of the proposed method was extensively tested on a number of multi-focus images, including no-reference images, reference images, and images with different noise levels. The experimental results clearly indicate that the proposed method outperformed various state-of-the-art fusion methods in terms of both subjective and objective evaluations, and is more suitable for VSNs.
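The final-fusion step described in the abstract (block residuals, decision map, consistency verification) can be sketched as follows. This is an illustrative reconstruction, not the authors' exact algorithm: the block size, the squared-residual measure, and the 3×3 majority-vote consistency rule are all assumptions. The idea is that the source image whose block is closer to the initially fused image is taken to be the in-focus one for that block:

```python
import numpy as np

def block_residual_decision_map(img_a, img_b, fused, block=8):
    """0 -> take the block from img_a, 1 -> from img_b (smaller residual wins)."""
    h, w = fused.shape
    by, bx = h // block, w // block
    dmap = np.zeros((by, bx), dtype=int)
    for i in range(by):
        for j in range(bx):
            sl = (slice(i * block, (i + 1) * block),
                  slice(j * block, (j + 1) * block))
            ra = np.sum((img_a[sl] - fused[sl]) ** 2)  # block residual vs. A
            rb = np.sum((img_b[sl] - fused[sl]) ** 2)  # block residual vs. B
            dmap[i, j] = 0 if ra <= rb else 1
    return dmap

def consistency_verification(dmap):
    """3x3 majority vote: an isolated block takes its neighbours' label."""
    p = np.pad(dmap, 1, mode="edge")
    out = dmap.copy()
    for i in range(dmap.shape[0]):
        for j in range(dmap.shape[1]):
            out[i, j] = 1 if p[i:i + 3, j:j + 3].sum() > 4 else 0
    return out

def compose(img_a, img_b, dmap, block=8):
    """Assemble the final image block-by-block according to the decision map."""
    out = img_a.copy()
    for i in range(dmap.shape[0]):
        for j in range(dmap.shape[1]):
            if dmap[i, j]:
                sl = (slice(i * block, (i + 1) * block),
                      slice(j * block, (j + 1) * block))
                out[sl] = img_b[sl]
    return out
```

Consistency verification is what removes isolated misclassified blocks, which would otherwise appear as visible patches of the wrong source in the final image.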
“…Mutual information quantifies how much information from the input images is transferred into the resultant image. A higher mutual information value indicates a more effective IF technique [1]. This metric is represented as…”
Section: Mutual Information (MI)
confidence: 99%
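The formula itself is elided in the excerpt, but mutual information between two images is conventionally estimated from their joint intensity histogram. The following is a hedged sketch: the bin count, and the common convention of summing the fused image's MI with each source for the fusion metric, are assumptions rather than details from the cited paper:

```python
import numpy as np

def mutual_information(x, y, bins=32):
    """MI(X;Y) = sum_{x,y} p(x,y) * log2( p(x,y) / (p(x) p(y)) )."""
    hist, _, _ = np.histogram2d(x.ravel(), y.ravel(), bins=bins)
    pxy = hist / hist.sum()                  # joint distribution
    px = pxy.sum(axis=1, keepdims=True)      # marginal of x
    py = pxy.sum(axis=0, keepdims=True)      # marginal of y
    nz = pxy > 0                             # avoid log(0)
    return float(np.sum(pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])))

def fusion_mi(src_a, src_b, fused, bins=32):
    """Typical fusion metric: information the fused image shares with each source."""
    return (mutual_information(src_a, fused, bins)
            + mutual_information(src_b, fused, bins))
```

MI of an image with itself equals its entropy, and MI of two independent images is near zero, which is why a larger value is read as more source information surviving into the fused result.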
“…It is designed by modeling radiometric and contrast distortion. It combines luminance distortion, contrast distortion, loss of correlation, and structure distortion between the source images and the final image [1,9]. This metric is defined as follows:…”
Section: Structural Similarity Index (SSIM)
confidence: 99%
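Although the definition is elided in the excerpt, a simplified global (single-window) form of the standard SSIM index can be sketched as below. Real implementations compute it over sliding windows and average; this whole-image version is only illustrative, and the standard constants C1 = (0.01·L)² and C2 = (0.03·L)² for dynamic range L are assumed:

```python
import numpy as np

def ssim_global(x, y, data_range=255.0):
    """Global SSIM:
    ((2*mu_x*mu_y + C1) * (2*cov_xy + C2)) /
    ((mu_x^2 + mu_y^2 + C1) * (var_x + var_y + C2))
    """
    x = x.astype(np.float64)
    y = y.astype(np.float64)
    c1 = (0.01 * data_range) ** 2
    c2 = (0.03 * data_range) ** 2
    mx, my = x.mean(), y.mean()            # luminance terms
    vx, vy = x.var(), y.var()              # contrast terms
    cov = ((x - mx) * (y - my)).mean()     # structure term
    return (((2 * mx * my + c1) * (2 * cov + c2))
            / ((mx * mx + my * my + c1) * (vx + vy + c2)))
```

Identical images score exactly 1, and any luminance, contrast, or structural distortion pulls the score below 1, which matches the decomposition described in the quoted passage.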
“…As reported in the literature [15][16][17][18][19][20], many IF frameworks have been developed for the fusion of diverse images in the medical domain. IF is generally categorized into pixel, feature, and decision levels [1]. Pixel-level image fusion is performed at the lowest level and is the simplest of all fusion methods.…”
In the medical domain, multiple modalities such as computed tomography (CT) and magnetic resonance imaging (MRI) are integrated into a resultant fused image. Image fusion (IF) is a method by which vital information can be preserved by extracting all important information from the multiple source images into the resultant fused image. The analytical and visual image quality can be enhanced by the integration of different images. In this paper, a new algorithm is proposed based on a guided filter with a new fusion rule for the fusion of different imaging modalities, such as MRI and fluorodeoxyglucose images of the brain, for the detection of tumors. The performance of the proposed method has been evaluated and compared with state-of-the-art image fusion techniques using various qualitative as well as quantitative evaluation metrics. From the results, it is observed that more information is preserved on edges and content visibility is higher than with the other techniques, which makes the method more suitable for real applications. The experimental results are evaluated using with-reference and without-reference metrics such as standard deviation, entropy, peak signal-to-noise ratio, and mutual information.
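The guided filter at the core of the algorithm described above (He et al.) can be sketched as follows. This single-channel NumPy version is illustrative only: the radius and regularization values are assumptions, and the paper's specific fusion rule built on top of it is not reproduced. The filter output is locally a linear transform of the guide image, so edges of the guide are preserved while the filtered signal (e.g. a rough focus or weight map) is smoothed:

```python
import numpy as np

def box_filter(img, r):
    """Mean over a (2r+1)^2 window via 2-D cumulative sums (edge-padded)."""
    k = 2 * r + 1
    p = np.pad(img, r, mode="edge")
    c = np.cumsum(np.cumsum(p, axis=0), axis=1)
    c = np.pad(c, ((1, 0), (1, 0)))              # zero row/col for differencing
    h, w = img.shape
    return (c[k:k + h, k:k + w] - c[0:h, k:k + w]
            - c[k:k + h, 0:w] + c[0:h, 0:w]) / (k * k)

def guided_filter(guide, src, r=4, eps=1e-2):
    """Single-channel guided filter: out ~= a * guide + b per local window."""
    mean_i = box_filter(guide, r)
    mean_p = box_filter(src, r)
    cov_ip = box_filter(guide * src, r) - mean_i * mean_p
    var_i = box_filter(guide * guide, r) - mean_i * mean_i
    a = cov_ip / (var_i + eps)                   # eps controls edge preservation
    b = mean_p - a * mean_i
    return box_filter(a, r) * guide + box_filter(b, r)
```

In guided-filter-based fusion schemes, a source image is typically used as the guide while a binary or saliency weight map is filtered, producing smooth weights that still snap to the image's edges.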