Current Image Fusion (IF) algorithms concentrate on the fusion process alone, paying less attention to critical issues such as the similarity between the two input images and the features that participate in the fusion. This paper addresses these two issues with a new Image Fusion framework based on Convolutional Neural Networks (CNNs). A CNN offers capabilities such as pre-training and similarity scoring, but its standard functionality is limited. A CNN model that combines classification prediction with similarity estimation, termed the Classification Similarity Network (CSN), is therefore introduced. ResNet50 and GoogLeNet are modified to serve as the classification branches of CSN v1 and CSN v2, respectively, in order to reduce feature dimensions. The fusion rules used to combine the extracted features depend on the input dataset. The output of the fusion process is fed into CSN v3 to improve the quality of the output image. The proposed CSN model is pre-trained and fully convolutional, and it accounts for the similarities between the input images at fusion time. The model is applied to Multi-Focus, Multi-Modal Medical, Infrared-Visible, and Multi-Exposure image datasets, and the outcomes are analyzed. The proposed model shows a significant improvement over state-of-the-art IF algorithms.
An image is a two-dimensional function defined over spatial coordinates. At any pair of coordinates (x, y), the amplitude of the function is called the intensity of the image at that point. A digital image comprises a finite number of elements, each of which has a particular value at a particular location; these elements are called pixels. Image Fusion is the process of combining data from two or more images of a scene into a single image that is more descriptive than any of the input images and more suitable for information processing. Image Fusion (IF) has been utilized in numerous application areas. Remote Sensing Satellites (RSS) produce different images based on their sensory characteristics; among these, Panchromatic (PAN) and Multi-Spectral (MS) images are widely used in Satellite Image Fusion (SIF). IF techniques are broadly classified into spatial-domain and frequency-domain methods. Wavelet Fusion Techniques (WFT), which operate in the Frequency Domain (FD), have applications in the medical, space, and military fields. This literature delivers a study of some of the IF techniques. Remote Sensing Image (RSI) and Data Fusion (DF) seek to merge data acquired from sensors installed on satellites, airborne platforms, and ground-based instruments, with specific spatial, spectral, and temporal resolutions, to produce merged data containing more accurate information than is found in any of the individual data sources.
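As an illustrative baseline (not the CNN-based method proposed above), the simplest spatial-domain fusion rule is a pixel-wise weighted average of two registered grayscale images; the function name and weighting scheme here are assumptions for demonstration only:

```python
import numpy as np

def average_fusion(img_a: np.ndarray, img_b: np.ndarray, alpha: float = 0.5) -> np.ndarray:
    """Pixel-wise weighted average of two aligned grayscale images.

    Each output pixel is alpha * A(x, y) + (1 - alpha) * B(x, y).
    Practical fusion methods (wavelet- or CNN-based) replace this naive
    rule with feature-aware combination, but the input/output contract
    is the same: two registered images in, one fused image out.
    """
    if img_a.shape != img_b.shape:
        raise ValueError("input images must be registered to the same size")
    fused = alpha * img_a.astype(np.float64) + (1.0 - alpha) * img_b.astype(np.float64)
    return np.clip(fused, 0, 255).astype(np.uint8)

# Two constant test images: the fused result lies between their intensities.
a = np.full((4, 4), 100, dtype=np.uint8)
b = np.full((4, 4), 200, dtype=np.uint8)
print(average_fusion(a, b)[0, 0])  # 150
```

More sophisticated rules (max-selection, wavelet-coefficient fusion, learned CNN weights) differ only in how the combination step is computed.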
Pupil recognition is useful in many real-time systems, including ophthalmology testing devices and wheelchair assistance. Pupil detection is difficult across a wide range of datasets owing to varying pupil size and occlusion by eyelids and eyelashes. Deep Convolutional Neural Networks (DCNNs) are being used in pupil recognition systems and have shown promising accuracy. To improve accuracy and cope with larger datasets, this work proposes BOC (BAT Optimized CNN)-IrisNet, which optimizes the input weights and hidden layers of a DCNN using the evolutionary BAT algorithm to efficiently locate the human pupil region. The proposed method builds on a very deep architecture and techniques from recently developed popular CNNs. Experimental results show that BOC-IrisNet can efficiently model iris microstructures and provides a stable, discriminating iris representation that is lightweight, easy to implement, and of cutting-edge accuracy. Finally, a region-based black-box method for determining pupil center coordinates is introduced. The proposed architecture was tested on various iris databases: the CASIA (Chinese Academy of Sciences Institute of Automation) Iris V4 dataset, with 99.5% sensitivity and 99.75% accuracy; the IIT (Indian Institute of Technology) Delhi dataset, with 99.35% specificity; and the MMU (Multimedia University) dataset, with 99.45% accuracy, all higher than existing architectures.
Remote Sensing Images (RSI) are captured by satellites. The quality of an RSI depends primarily on environmental conditions and the capability of the image-capturing device. Rapid technological development has enabled the generation of High-Resolution (HR) satellite images; however, these images must be processed scientifically for the best results. A new Image Fusion (IF) technique combining wavelets with Deep Convolutional Generative Adversarial Networks (DCGAN) is designed to produce super-resolution satellite images. A Residual Convolutional Neural Network (ResNet) increases the fused-image accuracy by mitigating the vanishing-gradient problem. Peak Signal-to-Noise Ratio (PSNR), Structural Similarity Index Measure (SSIM), Feature Similarity Index Measure (FSIM), and Universal Image Quality index (UIQ) are taken as the metrics for comparing the results with other models. The experimental results improve on previous methods and minimize spatial and spectral losses during fusion.
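Of the metrics listed above, PSNR has the simplest closed form: 10·log10(peak² / MSE), where MSE is the mean squared error between a reference image and the fused result. A minimal sketch (the function name and test images are illustrative, not from the paper):

```python
import numpy as np

def psnr(reference: np.ndarray, fused: np.ndarray, peak: float = 255.0) -> float:
    """Peak Signal-to-Noise Ratio in dB: 10 * log10(peak^2 / MSE).

    Higher is better; identical images give infinite PSNR because the
    mean squared error is zero.
    """
    mse = np.mean((reference.astype(np.float64) - fused.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")
    return 10.0 * np.log10(peak ** 2 / mse)

# An 8x8 zero image with one pixel corrupted by 8 gives MSE = 64/64 = 1,
# so PSNR = 10 * log10(255^2) ≈ 48.13 dB.
ref = np.zeros((8, 8), dtype=np.uint8)
noisy = ref.copy()
noisy[0, 0] = 8
print(round(psnr(ref, noisy), 2))  # 48.13
```

SSIM, FSIM, and UIQ are structural rather than purely error-based metrics and are substantially more involved; library implementations (e.g. in scikit-image for SSIM) are normally used in practice.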