2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR) 2017
DOI: 10.1109/cvpr.2017.145
Beyond Triplet Loss: A Deep Quadruplet Network for Person Re-identification

Abstract: Person re-identification (ReID) is an important task in wide-area video surveillance that focuses on identifying people across different cameras. Recently, deep networks with a triplet loss have become a common framework for person ReID. However, the triplet loss focuses mainly on obtaining correct orderings on the training set; it still suffers from weak generalization from the training set to the testing set, resulting in inferior performance. In this paper, we design a quadruplet…
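The quadruplet formulation the abstract alludes to can be sketched as follows. This is a minimal illustration, not the paper's implementation: the margin values `alpha1` and `alpha2` and the squared-Euclidean distance are illustrative assumptions, and a real ReID model would compute this over batches of learned CNN embeddings.

```python
def euclidean_sq(u, v):
    """Squared Euclidean distance between two embedding vectors."""
    return sum((a - b) ** 2 for a, b in zip(u, v))

def quadruplet_loss(anchor, positive, neg1, neg2, alpha1=1.0, alpha2=0.5):
    """Quadruplet loss over one (anchor, positive, neg1, neg2) tuple.

    neg1 is a different identity from the anchor; neg2 is a third identity,
    distinct from both. The first term is the usual triplet ranking term;
    the second pushes the positive pair closer than any pair of distinct
    negative identities, which is what shrinks intra-class variation
    relative to inter-class variation beyond plain triplet loss.
    """
    d_ap = euclidean_sq(anchor, positive)  # anchor-positive distance
    d_an = euclidean_sq(anchor, neg1)      # anchor-negative distance
    d_nn = euclidean_sq(neg1, neg2)        # negative-negative distance
    triplet_term = max(0.0, d_ap - d_an + alpha1)
    quadruplet_term = max(0.0, d_ap - d_nn + alpha2)
    return triplet_term + quadruplet_term
```

With well-separated identities (e.g. a positive pair much closer than either negative pair), both hinge terms vanish and the loss is zero; when the positive pair is farther apart than a negative pair plus the margin, the loss becomes positive and drives the embedding apart.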

Cited by 1,087 publications (681 citation statements)
References 45 publications
“…Though deep features (DF) and deep matching networks (DMN) are no match for conventional metric learning methods, the results in Table 1 clearly show that if two major issues of re-identification (i.e., multimodal transforms and strong rejection capability against impostors) are handled well simultaneously, then performance comparable to or even higher than deep methods can be attained. Our IRM3 + CVI ( = 15) has 7.1% and 4.94% higher rank@1 than QuadrupletNet [33] and JLML [34], respectively. These results demonstrate that for a smaller dataset like VIPeR, deep matching networks have insufficient training samples to learn a discriminative network.…”
Section: Results on VIPeR
confidence: 99%
“…Only K-LFDA, when trained with the mom LE [24] feature, attains performance comparable to DMN. However, motivated to resolve the challenges of re-identification in the real world (i.e., multimodal image space and diverse impostors), IRM3 + CVI ( = 15) has much better results than MCP-CNN [39], E2E-CAN [31], Quadruplet-Net [33], and JLML [34], while our IRM3 + CVI ( = 15) has 1.49% higher rank@1 than DLPA [32]. DLPA extracts deep features by semantically aligning body parts, as well as rectifying pose variations.…”
Section: Results on CUHK01
confidence: 99%
“…Triplet loss is also widely used to learn fine-grained similarity metrics for images. Quadruplet loss (Chen et al, 2017c) strengthens the generalization capability and leads the model to produce a larger inter-class variation and a smaller intra-class variation, which is superior to triplet loss.…”
Section: Siamese Neural Network Architecture
confidence: 99%
“…We use the CASIA database (Yu et al, 2009; Chen et al, 2017c) to train our network model for the task of person re-identification. The person re-identification task achieved 88.24% top-1 accuracy and mAP = 70.68% with only a softmax loss.…”
Section: Person Re-identification
confidence: 99%