A critical aim of pansharpening is to fuse coherent spatial and spectral features from panchromatic and multispectral images, respectively. This study proposes a deep-siamese-network-based pansharpening model as a two-stage framework in a multiscale setting. In the first stage, a siamese network learns a common feature space between the panchromatic and multispectral bands. In the second stage, the output feature maps of the siamese network are fused. The parameters of the two stages are shared across scales so that spatial information is added consistently across scales, while spectral information is preserved by adding skip connections from the input multispectral image. This multi-level parameter-sharing mechanism in the pyramidal reconstruction of the pansharpened image better preserves spatial and spectral details simultaneously. Experiments using the deep siamese network in a multiscale setting (to obtain inter-band similarity among data from different sensors) show that the model outperforms several recent pansharpening methods.

INDEX TERMS Pansharpening, image fusion, deep learning, siamese networks, remote sensing, depth-of-field.
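The two-stage idea above can be illustrated with a minimal sketch. The abstract does not give the actual architecture, so the learned siamese branches are stood in for by a single shared 3x3 kernel applied identically to both inputs, and the fusion layer by a simple average; the kernel, image sizes, and fusion rule here are illustrative assumptions, not the paper's model.

```python
import numpy as np

def conv2d(img, kernel):
    """Valid-mode 2-D correlation (minimal, loop-based)."""
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i+kh, j:j+kw] * kernel)
    return out

def siamese_fuse(pan, ms_band, shared_kernel):
    """Stage 1: the SAME kernel maps both inputs into a common feature
    space (weight sharing = siamese branches).
    Stage 2: fuse the two feature maps, then add the MS input back as a
    spectral-preserving skip connection."""
    f_pan = conv2d(pan, shared_kernel)
    f_ms = conv2d(ms_band, shared_kernel)
    fused = 0.5 * (f_pan + f_ms)          # stand-in fusion layer
    crop = (pan.shape[0] - fused.shape[0]) // 2
    skip = ms_band[crop:crop + fused.shape[0], crop:crop + fused.shape[1]]
    return fused + skip

rng = np.random.default_rng(0)
pan = rng.random((16, 16))
ms = rng.random((16, 16))       # one MS band, already upsampled to PAN size
k = np.full((3, 3), 1 / 9.0)    # shared kernel (stand-in for learned weights)
out = siamese_fuse(pan, ms, k)
print(out.shape)                # (14, 14)
```

Because the kernel is shared, identical inputs produce identical feature maps, which is the siamese property the first stage relies on.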
During the current pandemic, the whole world is suffering from an infectious disease, Coronavirus Disease (COVID-19). Wearing a mask is important to minimize transmission of the disease, but when everyone wears a face mask, it becomes difficult for recognition systems to identify a specific person, since facial features such as the mouth and nose are covered by the mask. Conventional face-recognition systems are therefore inefficient at recognizing masked faces. To solve this issue, a face-recognition system is proposed that recognizes both masked and unmasked faces. Support vector machine (SVM) and random forest (RF) classifiers are trained on a dedicated dataset and effectively recognize masked and unmasked faces. The classifier relies on the facial features that remain visible, such as the eyes, eyebrows, forehead, ears, and hair. The dataset consists of images of 28 classes, with and without face masks. The trained system recognizes the person whether or not they are wearing a mask. The recognition accuracy is approximately 98.2% across the different classes, and the proposed recognizer is also compared with state-of-the-art existing techniques.
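The pipeline shape (extract features from the uncovered part of the face, then classify identity) can be sketched as follows. The abstract does not specify the feature extractor or training details, so this sketch crops the upper half of a face image as a proxy for eye/eyebrow/forehead features and uses a tiny nearest-centroid classifier as a runnable stand-in for the paper's SVM/RF classifiers; the synthetic data and the two-identity setup are assumptions for illustration only.

```python
import numpy as np

def upper_face_features(img):
    """Keep only the upper half of the face (eyes, eyebrows, forehead),
    since the mouth and nose may be hidden by a mask."""
    h = img.shape[0] // 2
    return img[:h].ravel()

class NearestCentroid:
    """Tiny stand-in for the paper's SVM / random forest classifiers:
    predict the class whose mean feature vector is closest."""
    def fit(self, X, y):
        self.classes_ = np.unique(y)
        self.centroids_ = np.array([X[y == c].mean(axis=0)
                                    for c in self.classes_])
        return self

    def predict(self, X):
        d = np.linalg.norm(X[:, None, :] - self.centroids_[None], axis=2)
        return self.classes_[d.argmin(axis=1)]

rng = np.random.default_rng(1)
# two hypothetical identities, 5 synthetic 8x8 "face" images each
faces_a = rng.normal(0.2, 0.05, (5, 8, 8))
faces_b = rng.normal(0.8, 0.05, (5, 8, 8))
X = np.array([upper_face_features(f)
              for f in np.concatenate([faces_a, faces_b])])
y = np.array([0] * 5 + [1] * 5)
clf = NearestCentroid().fit(X, y)
print(clf.predict(X[:1]))   # [0]
```

In practice the stand-in classifier would be replaced by `sklearn.svm.SVC` or `sklearn.ensemble.RandomForestClassifier` trained on real masked/unmasked face images.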
Retaining the spatial characteristics of the panchromatic image and the spectral information of the multispectral bands is a critical issue in pansharpening. This paper proposes a pyramid-based deep fusion framework that preserves spectral and spatial characteristics at different scales. Spectral information is preserved by passing the corresponding low-resolution multispectral image through the network as a residual component at each scale. Spatial information is preserved by training the network at each scale on the high frequencies of the panchromatic image alongside the corresponding low-resolution multispectral image. The parameters of the different networks are shared across the pyramid in order to add spatial details consistently across scales, and are also shared across the fusion layers within the network at a given scale. Experiments suggest that the proposed architecture outperforms state-of-the-art pansharpening models. The proposed model, code, and dataset are publicly available on GitHub.
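The pyramid scheme described above, injecting panchromatic high frequencies at each scale while keeping the multispectral image as the residual path, resembles classic multiscale detail injection, which the following sketch implements. The learned networks are replaced by a fixed box-blur high-pass and a scalar injection gain; the pyramid depth, filter, and gain are illustrative assumptions, not the paper's trained model.

```python
import numpy as np

def box_blur(img):
    """3x3 box blur with edge padding (a simple low-pass filter)."""
    p = np.pad(img, 1, mode='edge')
    return sum(p[i:i + img.shape[0], j:j + img.shape[1]]
               for i in range(3) for j in range(3)) / 9.0

def downsample(img):
    return box_blur(img)[::2, ::2]

def upsample(img):
    return np.repeat(np.repeat(img, 2, axis=0), 2, axis=1)

def pyramid_fuse(pan, ms, levels=2, gain=1.0):
    """At each pyramid level, inject the PAN high frequencies
    (PAN minus its low-pass) into the upsampled MS band; the MS image
    itself acts as the residual component, so spectral content is
    preserved exactly when gain -> 0."""
    pans = [pan]
    for _ in range(levels):
        pans.append(downsample(pans[-1]))
    out = ms                                       # coarsest-scale MS band
    for lvl in range(levels - 1, -1, -1):
        out = upsample(out)
        detail = pans[lvl] - box_blur(pans[lvl])   # PAN high frequencies
        out = out + gain * detail                  # spatial injection
    return out

rng = np.random.default_rng(2)
pan = rng.random((16, 16))
ms = rng.random((4, 4))        # low-resolution MS band (factor 4)
sharp = pyramid_fuse(pan, ms)
print(sharp.shape)             # (16, 16)
```

Setting `gain=0.0` recovers the plain upsampled multispectral band, which makes the residual (spectral-preserving) role of the MS path explicit.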
Multi-focus image fusion aims at combining source information from differently focused images and has important applications in machine vision. This paper focuses on the removal of fence occlusions in multi-focus images. The proposed model extracts a fence occlusion map using salient image features and refines it with morphological operators. Binary operators and inpainting methods are used for fence removal and restoration. The model estimates the fence area using statistical characteristics of the focused regions, and binary filtering is used to thin the enlarged areas for optimized restoration. Guided filtering is employed for consistency verification. Fusion and restoration results are compared using several (reference and no-reference) image quality metrics. Simulations show that the proposed scheme achieves better results, both visually and quantitatively, than existing state-of-the-art techniques.
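The mask-refine-then-inpaint step can be sketched with two small operations: a morphological dilation that enlarges the detected fence mask so the whole occlusion is covered, and an iterative neighbour-averaging fill. Both are crude stand-ins chosen so the sketch is self-contained; the paper's actual salient-feature detection, statistical fence estimation, and inpainting method are not reproduced here.

```python
import numpy as np

def dilate(mask):
    """3x3 binary dilation via shifted ORs (morphological refinement)."""
    p = np.pad(mask, 1)
    out = np.zeros_like(mask)
    for i in range(3):
        for j in range(3):
            out |= p[i:i + mask.shape[0], j:j + mask.shape[1]]
    return out

def inpaint(img, mask, iters=50):
    """Fill masked (fence) pixels by repeatedly averaging their
    4-neighbours -- a crude stand-in for the paper's inpainting step."""
    m = mask.astype(bool)
    out = img.copy()
    out[m] = out[~m].mean()                        # initial fill value
    for _ in range(iters):
        p = np.pad(out, 1, mode='edge')
        avg = (p[:-2, 1:-1] + p[2:, 1:-1] +
               p[1:-1, :-2] + p[1:-1, 2:]) / 4.0
        out = np.where(m, avg, img)                # only masked pixels change
    return out

img = np.ones((8, 8))
mask = np.zeros((8, 8), dtype=np.uint8)
mask[:, 3] = 1                  # a thin vertical "fence" wire
mask = dilate(mask)             # enlarge so the whole occlusion is covered
restored = inpaint(img, mask)
print(float(restored.max() - restored.min()))   # 0.0: fence gone from flat image
```

A real pipeline would derive `mask` from saliency and focus statistics rather than hard-coding it, and could use `cv2.dilate`/`cv2.inpaint` in place of these loops.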