2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR) 2016
DOI: 10.1109/cvpr.2016.142
Similarity Learning with Spatial Constraints for Person Re-identification

Cited by 304 publications (172 citation statements)
References 27 publications
“…In Table 1 only SS-SVM [16] is a metric that tries to model the transform model for each individual person; however, it pays no attention to acquiring resistance against impostors and thus has 19.21% lower rank@1 accuracy than IRM3 + CVI ( = 15). Though IRM3 achieves strong results, it still has 1.36% lower rank@1 than SCSP [38]. VIPeR exhibits large pose, misalignment, and body-part displacement issues that are not specifically addressed in our work; handling them is needed to improve the matching results substantially.…”
Section: Results On VIPeR
confidence: 99%
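The rank@1 comparisons quoted above come from the standard CMC (cumulative matching characteristic) protocol. As a minimal sketch, rank@1 is the fraction of queries whose single nearest gallery sample shares the query's identity; the function name and toy data below are illustrative, not from the paper.

```python
import numpy as np

def rank1_accuracy(dist, query_ids, gallery_ids):
    # For each query row, take the gallery sample with the smallest
    # distance and check whether its identity matches the query's.
    nearest = np.argmin(dist, axis=1)
    return float(np.mean(np.asarray(gallery_ids)[nearest] == np.asarray(query_ids)))

# Toy example: each query is closest to the gallery sample with the
# same identity, so rank@1 is 1.0.
dist = np.array([[0.1, 0.9],
                 [0.8, 0.2]])
print(rank1_accuracy(dist, np.array([0, 1]), np.array([0, 1])))  # -> 1.0
```

Differences like the quoted 1.36% between IRM3 and SCSP are differences in this quantity, evaluated over all queries.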
“…A growing number of deep-learning-based methods have been proposed to address cross-view person Re-Id in two main aspects: feature extraction and metric learning. The first category of methods [7,92,58,89,1,68,42] aims to generate effective discriminative representations by learning common or relevant visual features from cross-view samples to combat view variations. The other category of approaches [10,39,43,41,44,107,3,96,37] employs a variety of hand-crafted visual features, such as color histograms, local maximal occurrence, and local binary patterns, to learn a similarity metric that measures the visual similarity between samples.…”
Section: Results Gallery
confidence: 99%
“…However, the above metric learning methods focus on a single holistic metric, which discards the geometric structure of human objects and thus limits discriminative power. To address this issue, and exploiting the relatively stable spatial distribution of body parts such as the head, torso, and legs, Chen et al [50] propose spatially constrained similarity learning using polynomial feature maps (SCSP) for human re-id. The method combines a global similarity metric over the whole body-image region with multiple local similarity metrics over body-part regions, using multiple visual cues. Matching across cameras is performed on polynomial-kernel feature maps that represent human image pairs, with the aim of learning a similarity function that yields high scores for descriptors of the same person across cameras.…”
Section: Distance Metric Learning
confidence: 99%
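The global-plus-local combination described above can be sketched as a sum of bilinear similarity terms, one over the whole-body descriptor and one per spatially aligned body-part region. This is only an illustrative sketch of the structure: the function names, the identity metric matrices, and the toy descriptors below are assumptions, not SCSP's actual learned parameters or features.

```python
import numpy as np

def bilinear_similarity(x, y, M):
    # s(x, y) = x^T M y: the similarity form induced by a degree-2
    # polynomial feature map on the descriptor pair (x, y).
    return float(x @ M @ y)

def combined_score(parts_a, parts_b, M_global, M_locals):
    # One global term over the concatenated whole-body descriptor,
    # plus one local term per body-part region (head, torso, legs, ...).
    ga, gb = np.concatenate(parts_a), np.concatenate(parts_b)
    score = bilinear_similarity(ga, gb, M_global)
    for xa, xb, M in zip(parts_a, parts_b, M_locals):
        score += bilinear_similarity(xa, xb, M)
    return score

# Toy example: two 2-D part descriptors, identity metrics.
parts = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]
M_g = np.eye(4)
M_l = [np.eye(2), np.eye(2)]
print(combined_score(parts, parts, M_g, M_l))  # -> 4.0
```

In SCSP the metric matrices are learned jointly rather than fixed, and the spatial constraint comes from comparing each part only against the corresponding part in the other image, as the per-part loop does here.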
“…To cope with this problem, Kuo et al [54] adopt multiple instance learning (MIL) to learn an appearance affinity model, which is then integrated with spatio-temporal information to train an improved inter-camera track association framework that tackles target handover tasks across cameras. In addition, people often walk in groups in crowded scenes, so group information is also applied to appearance matching across cameras.…”
[Fig. 12: Illustration of similarity learning with spatial constraints based on polynomial-kernel feature maps [50]. Symbols √ and × indicate whether CLM/GM-based tracking is used.]
Section: Supervised Learning-based CLM
confidence: 99%