Gait recognition, which verifies or identifies individuals by their walking style, has attracted considerable research attention in recent decades. Because gait can identify people at a distance from the camera, it supports surveillance systems and numerous other computer-vision applications. This paper proposes a Color-mapped Contour Gait Image (CCGI) for handling the varying factors of Cross-View Gait Recognition (CVGR). First, the contour in each gait image sequence is extracted with a Combination of Receptive Fields (CORF) contour-tracing algorithm, which derives the contour image using a Difference of Gaussians (DoG) and hysteresis thresholding. Hysteresis thresholding retains weak edges from the full pixel information and yields smoother, better-balanced features than an absolute threshold. Second, CCGI encodes spatial and temporal information via color mapping to obtain regularized contour images with fewer outliers. Relative to the frontal view of a human walking pattern, the appearance changes caused by cross-view variation diminish markedly as the view angle changes. The proposed work evaluates CVGR using a deep Convolutional Neural Network (CNN) framework, with CCGI serving as the gait feature for comparing and assessing the robustness of the proposed model. Experiments on the CASIA-B database compare the proposed method with previous methods; it achieves 94.65% accuracy with an improved recognition rate.
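The DoG-plus-hysteresis stage of the contour extraction described above can be sketched as follows. This is a simplified, illustrative stand-in for that step of the CORF pipeline, not the paper's implementation: the sigma values and the two thresholds are assumed example parameters, and the connectivity rule (keep weak edge pixels only when they touch a strong-edge component) follows the standard hysteresis idea.

```python
import numpy as np
from scipy import ndimage

def dog_hysteresis_contour(img, sigma1=1.0, sigma2=2.0, low=0.05, high=0.15):
    """Difference-of-Gaussians edge response followed by hysteresis
    thresholding. Parameter values are illustrative, not from the paper."""
    img = img.astype(float)
    # DoG: subtract a wider Gaussian blur from a narrower one
    dog = ndimage.gaussian_filter(img, sigma1) - ndimage.gaussian_filter(img, sigma2)
    response = np.abs(dog)
    response /= response.max() + 1e-12  # normalize to [0, 1]
    strong = response >= high           # confident edge pixels
    weak = response >= low              # candidate (weak) edge pixels
    # Hysteresis: keep weak components only if they contain a strong pixel
    labels, _ = ndimage.label(weak)
    keep = np.unique(labels[strong])
    return np.isin(labels, keep) & weak

# Toy example: a bright square on a dark background
img = np.zeros((32, 32))
img[8:24, 8:24] = 1.0
edges = dog_hysteresis_contour(img)
```

The hysteresis step is what the abstract credits with balancing the features: an absolute threshold at `high` alone would drop faint but connected contour segments, while `low` alone would admit isolated noise responses.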
Gait recognition is considered an emerging biometric technology for identifying humans by their walking behavior. The major challenge addressed in this article is that significant variation caused by covariate factors, such as clothing, carrying conditions, and view-angle changes, undesirably affects gait recognition performance. In recent years, deep learning techniques have achieved phenomenal accuracy on various challenging classification problems. Given the enormous amount of real-world data, a convolutional neural network can approximate complex nonlinear functions, motivating a generalized deep convolutional neural network (DCNN) architecture for gait recognition. The DCNN can handle relatively large multiview datasets with or without data augmentation and fine-tuning. This article proposes a color-mapped contour gait image as the gait feature for addressing variations caused by these cofactors and for gait recognition across views. We also compare various edge-detection algorithms for gait-template generation and select the best among them. The databases considered in our work are the widely used CASIA-B dataset and the OULP database. Our experiments show significant improvement in gait recognition for fixed-view, cross-view, and multiview settings compared with recent methodologies.
In remote sensing, sensors operating at different wavelengths capture data, and each sensor has its own capabilities and limitations. Synthetic aperture radar (SAR) collects data with high spatial and radiometric resolution, while optical remote sensors capture images with rich spectral information. With a well-designed algorithm, images fused from these sensors carry high information content, supporting applications such as weather forecasting, soil exploration, and crop classification. This work fuses optical and radar data from the Sentinel series of satellites using a deep learning-based convolutional neural network (CNN). The three-fold image fusion approach maps onto the CNN's layered architecture: the image transform in the convolutional layer, the activity-level measurement in the max-pooling layer, and finally the decision-making in the fully connected layer. The objective is to show that the proposed deep learning-based CNN fusion approach overcomes some of the difficulties of traditional image fusion approaches. To assess the performance of the CNN-based fusion, a range of image-quality assessment metrics is analyzed. The results demonstrate that the integration of spatial and spectral information is numerically evident in the output image and is highly robust. Finally, the objective assessment results outperform state-of-the-art fusion methodologies.
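The transform → activity-measurement → decision pipeline described above can be sketched in miniature. This is an assumed, illustrative version: a fixed Laplacian kernel stands in for the learned convolutional transform, and a hard per-patch "pick the more active source" rule stands in for the trained fully connected decision stage; none of the kernels or rules below are from the paper.

```python
import numpy as np

def conv2d(img, kernel):
    """'Valid' 2-D convolution: the feature-transform stage."""
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.empty((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

def activity_level(feat, pool=2):
    """Max pooling of the absolute response: per-patch activity measure."""
    h, w = feat.shape
    h, w = h - h % pool, w - w % pool
    f = feat[:h, :w].reshape(h // pool, pool, w // pool, pool)
    return np.abs(f).max(axis=(1, 3))

def fuse(sar, opt, pool=2):
    """Per pooled patch, keep the source with the higher activity.
    The Laplacian kernel and hard decision rule are illustrative
    stand-ins for the learned convolution + fully connected layers."""
    k = np.array([[0, 1, 0], [1, -4, 1], [0, 1, 0]], float)
    a_sar = activity_level(conv2d(sar, k), pool)
    a_opt = activity_level(conv2d(opt, k), pool)
    # Decision map at pooled resolution, upsampled to the conv output grid
    mask = (a_sar >= a_opt).repeat(pool, axis=0).repeat(pool, axis=1)
    crop_sar = sar[1:1 + mask.shape[0], 1:1 + mask.shape[1]]
    crop_opt = opt[1:1 + mask.shape[0], 1:1 + mask.shape[1]]
    return np.where(mask, crop_sar, crop_opt)
```

A trained fusion CNN would learn both the transform kernels and a soft decision map end to end; the hard selection here simply makes the role of each layer in the abstract's three-fold decomposition concrete.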