Gait as a behavioural biometric is concerned with how people walk. However, most existing gait representations capture both motion and appearance information, and are thus sensitive to changes in covariate conditions such as carrying and clothing. In this paper, a novel gait representation termed Gait Entropy Image (GEnI) is proposed. Based on computing entropy, a GEnI encodes in a single image the randomness of pixel values in the silhouette images over a complete gait cycle. It thus captures mostly motion information and is robust to covariate condition changes that affect appearance. Extensive experiments on the USF HumanID, CASIA and SOTON datasets demonstrate that the proposed gait representation outperforms existing methods, especially when there are significant appearance changes. Our experiments also show a clear advantage of GEnI over the alternatives without the assumption of cooperative subjects, i.e. when both the gallery and the probe sets consist of a mixture of gait sequences under different and unknown covariate conditions.
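The core computation behind a GEnI is the Shannon entropy of each pixel's binary value across the silhouette frames of one gait cycle. A minimal sketch, assuming aligned binary silhouettes stacked into a NumPy array (the function name and array layout are illustrative, not from the paper):

```python
import numpy as np

def gait_entropy_image(silhouettes):
    """Sketch of a Gait Entropy Image (GEnI) computation.

    silhouettes: array of shape (T, H, W) with values in {0, 1},
    holding T aligned silhouette frames of one complete gait cycle.
    Returns an (H, W) image where each pixel is the Shannon entropy
    (in bits) of that pixel's value over the cycle.
    """
    p = silhouettes.mean(axis=0)   # per-pixel probability of being foreground
    eps = 1e-12                    # guard against log(0)
    return -(p * np.log2(p + eps) + (1.0 - p) * np.log2(1.0 - p + eps))
```

Static pixels (always background or always foreground, i.e. appearance) get entropy near zero, while pixels that toggle during the cycle (i.e. motion, typically around the limbs) get entropy near one bit, which is how the representation suppresses appearance and keeps motion.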
Among the various factors that can affect the performance of gait recognition, changes in viewpoint pose the biggest problem. In this work, we develop a novel approach to cross-view gait recognition in which the view angle of a probe gait sequence is unknown. We formulate a Gaussian Process (GP) classification framework to estimate the view angle of each probe gait sequence. To measure the similarity of gait sequences captured at different view angles, we model the correlation of gait sequences from different views using Canonical Correlation Analysis (CCA) and use the correlation strength as the similarity measure. This differs significantly from existing approaches, which reconstruct gait features in different views either through 2D view transformation or 3D calibration. Without explicit reconstruction, our approach can cope with feature mismatch across views and is more robust against feature noise. Our experiments validate that the proposed method significantly outperforms the existing state-of-the-art methods.
One of the major requirements of content based image retrieval (CBIR) systems is to ensure meaningful image retrieval against query images. The performance of these systems is severely degraded when image content that does not contain the objects of interest is included during the image representation phase. Image segmentation is often proposed as a solution, but no existing technique can guarantee robust object extraction; moreover, most segmentation techniques are slow and their results are unreliable. To overcome these problems, a bandelet transform based image representation technique is presented in this paper, which reliably returns information about the major objects found in an image. For image retrieval, artificial neural networks (ANN) are applied, and the performance of the system is evaluated on three standard data sets used in the CBIR domain.
The strength of gait, compared to other biometrics, is that it does not require cooperative subjects. Previous gait recognition approaches were evaluated using a gallery set consisting of gait sequences of people under similar covariate conditions (i.e. clothing, surface, carrying, and view conditions). This evaluation procedure, however, implies that the gait data are collected in a cooperative manner so that the covariate conditions are known a priori. In this work, the performance of state-of-the-art gait recognition approaches is evaluated without the assumption of cooperative subjects, i.e. the gallery set consists of a mixture of gait sequences under different, unknown covariate conditions. The results show that the performance of the existing approaches drops drastically under this more realistic experimental setup. We argue that selecting the most relevant gait features, invariant to changes in gait covariate conditions, is the key to developing a gait recognition system that works without subject cooperation. To that end, we propose a novel gait recognition approach which performs automatic feature selection on each pair of gallery and probe gait sequences, and seamlessly integrates feature selection with an Adaptive Component and Discriminant Analysis (ACDA) for fast recognition. Experiments are carried out to demonstrate that the proposed approach significantly outperforms the existing techniques.
In this paper we address the problem of selecting the most relevant features for human identification by gait. Although gait as a behavioral biometric is concerned with how people walk rather than how people look, most existing gait recognition approaches employ both shape and dynamics information for recognition. This is because shape, as a static appearance feature, also contains useful information for identification. However, the inclusion of shape information in the gait features can also introduce variations that hinder recognition performance. To address this problem, we develop both supervised and unsupervised feature selection methods to extract the most relevant and informative features from the Gait Energy Image (GEI) for human identification. Extensive experiments indicate that our feature selection methods significantly improve the performance of gait recognition.
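To make the supervised selection idea concrete, here is a minimal sketch that scores each GEI pixel by a Fisher-style ratio of between-class to within-class variance and keeps the top-scoring pixels. This is a generic supervised selection criterion used for illustration, not necessarily the exact method developed in the paper:

```python
import numpy as np

def fisher_score_mask(geis, labels, keep_ratio=0.3):
    """Select GEI pixels with the highest Fisher score.

    geis: (N, D) array of flattened Gait Energy Images.
    labels: (N,) array of subject identities.
    Returns a boolean mask over the D pixels keeping the top
    keep_ratio fraction by between/within-class variance ratio.
    """
    overall_mean = geis.mean(axis=0)
    between = np.zeros(geis.shape[1])
    within = np.zeros(geis.shape[1])
    for c in np.unique(labels):
        g = geis[labels == c]
        class_mean = g.mean(axis=0)
        between += len(g) * (class_mean - overall_mean) ** 2
        within += ((g - class_mean) ** 2).sum(axis=0)
    score = between / (within + 1e-12)   # high score = discriminative pixel
    k = max(1, int(keep_ratio * geis.shape[1]))
    mask = np.zeros(geis.shape[1], dtype=bool)
    mask[np.argsort(score)[-k:]] = True
    return mask
```

Pixels dominated by appearance variation (e.g. clothing regions that vary within a subject) receive low scores and are discarded, while pixels stable within a subject but different across subjects are retained for recognition.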