2016
DOI: 10.1109/tcyb.2015.2452577
View Transformation Model Incorporating Quality Measures for Cross-View Gait Recognition

Abstract: Cross-view gait recognition authenticates a person using a pair of gait image sequences with different observation views. View difference causes degradation of gait recognition accuracy, and so several solutions have been proposed to suppress this degradation. One useful solution is to apply a view transformation model (VTM) that encodes a joint subspace of multiview gait features trained with auxiliary data from multiple training subjects, who are different from test subjects (recognition targets). In the VTM…

Cited by 85 publications (57 citation statements)
References 55 publications (107 reference statements)
“…Following the protocol of [21,23,24] (publicly available at http://www.am.sanken.osaka-u.ac.jp/BiometricDB/dataset/GaitLP/Benchmarks.html), five 2-fold cross validations were performed. During each training phase, 956 × (956 − 1) = 912,980 intra-class samples and 956 × 1 = 956 inter-class samples were used for training Joint Bayesian.…”
Section: Results for the Cross-View Setting
Confidence: 99%
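The sample counts quoted above are simple pair-counting arithmetic over the 956 training subjects. A minimal Python sketch (not from any of the cited papers; it only mirrors the quoted figures, assuming one probe and one gallery sequence per subject) verifies them:

```python
# Sketch: reproduce the training-pair counts quoted from the protocol of
# [21,23,24]. Labels (intra/inter) follow the quoted text verbatim.

def pair_counts(num_subjects: int) -> tuple[int, int]:
    """Return (pairs_across_subjects, pairs_within_subject).

    Pairing each subject's sequence with every *other* subject's sequence
    gives n * (n - 1) pairs; pairing each sequence with its own subject's
    other sequence gives n * 1 pairs.
    """
    across = num_subjects * (num_subjects - 1)
    within = num_subjects * 1
    return across, within

across, within = pair_counts(956)
print(across, within)  # 912980 956, matching the quoted 912,980 and 956
```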
“…For the first setting, all the subjects were used to evaluate the performance of our proposed DeepGait, so that the result could be reliable in a statistical manner. For the second setting, we used a subset of the OULP dataset following the protocol of [21,23,24] for comparison. For further comparison, experimental results, learning models, and test codes are released in Supplementary Materials.…”
Section: Methods
Confidence: 99%
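The "five 2-fold cross validations" mentioned in the quoted protocol can be sketched in a few lines of stdlib Python. This is an assumption-laden illustration, not the authors' released code: the pool size of 1912 subjects is hypothetical, chosen only because it is consistent with the two folds of 956 subjects quoted earlier.

```python
# Sketch (assumption, not the protocol's actual code): five repetitions of
# 2-fold cross-validation over a pool of subject IDs. Each repetition
# shuffles the pool, splits it in half, and uses each half once as the
# test set, yielding 5 x 2 = 10 train/test splits in total.
import random

def five_twofold_splits(subject_ids, seed=0):
    """Yield (train_ids, test_ids) ten times: 5 repeats x 2 folds."""
    rng = random.Random(seed)       # fixed seed for reproducible splits
    ids = list(subject_ids)
    for _ in range(5):              # five repetitions
        rng.shuffle(ids)
        half = len(ids) // 2
        folds = (ids[:half], ids[half:])
        for k in (0, 1):            # each fold serves once as the test set
            yield folds[1 - k], folds[k]

splits = list(five_twofold_splits(range(1912)))  # 1912 = 2 x 956 (assumed)
print(len(splits))  # 10
```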