2017
DOI: 10.1145/3159171

A Discriminatively Learned CNN Embedding for Person Reidentification

Abstract: In this paper, we revisit two popular convolutional neural network (CNN) models in person re-identification (re-ID), i.e., the verification and identification models. The two models have their respective advantages and limitations due to their different loss functions. In this paper, we shed light on how to combine the two models to learn more discriminative pedestrian descriptors. Specifically, we propose a siamese network that simultaneously computes the identification loss and verification loss. Given a pair of tra…
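The combined objective described in the abstract (an identification loss on each image of the pair plus a verification loss on the pair itself) can be illustrated with a minimal PyTorch-style sketch. This is not the authors' code: the backbone, embedding dimension, loss weighting, and the element-wise squared-difference comparison are assumptions made for clarity.

import torch
import torch.nn as nn

class SiameseIDVerif(nn.Module):
    """Shared-weight siamese model with identification and verification heads (illustrative sketch)."""
    def __init__(self, backbone: nn.Module, feat_dim: int, num_ids: int):
        super().__init__()
        self.backbone = backbone                        # shared CNN feature extractor, assumed to output [N, feat_dim]
        self.id_head = nn.Linear(feat_dim, num_ids)     # identification branch: which identity is this?
        self.verif_head = nn.Linear(feat_dim, 2)        # verification branch: same person or not?

    def forward(self, img1, img2):
        f1 = self.backbone(img1)
        f2 = self.backbone(img2)
        id_logits1 = self.id_head(f1)
        id_logits2 = self.id_head(f2)
        verif_logits = self.verif_head((f1 - f2) ** 2)  # compare the pair via element-wise squared difference (assumed form)
        return id_logits1, id_logits2, verif_logits

def combined_loss(id_logits1, id_logits2, verif_logits, label1, label2, same_label, w_verif=1.0):
    """Identification loss on each image plus a weighted verification loss on the pair."""
    ce = nn.CrossEntropyLoss()
    return ce(id_logits1, label1) + ce(id_logits2, label2) + w_verif * ce(verif_logits, same_label)

During training, each mini-batch supplies image pairs with their identity labels and a binary same/different label; at test time only the embedding f produced by the backbone would be used as the pedestrian descriptor.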


Cited by 606 publications (375 citation statements)
References 61 publications
“…5. The proposed DFBP largely improves the performance compared with the previously reported results, such as 10.51% mAP and 13.72% rank-1 accuracy in [19], and 48.42% mAP and 54.42% rank-1 accuracy in [16]. These results indicate that the proposed DFBP can also learn discriminative features under different viewpoints.…”
Section: Market1501 Database (supporting)
Confidence: 52%
“…2a. In order to connect the two networks, we employ a non-parametric layer called the square layer [16] to compare the features f1 and f2. The inputs of the square layer are f1 and f2, and the output is formulated as:…”
Section: Part-based Feature Learning Model (mentioning)
Confidence: 99%
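The square layer in this excerpt is non-parametric, and its formula is elided above. A minimal sketch, assuming PyTorch and the commonly used element-wise squared difference between the two feature vectors, could look like this:

import torch
import torch.nn as nn

class SquareLayer(nn.Module):
    """Non-parametric comparison of two feature tensors (sketch; the exact formula is elided in the excerpt)."""
    def forward(self, f1: torch.Tensor, f2: torch.Tensor) -> torch.Tensor:
        return (f1 - f2) ** 2   # element-wise squared difference, same shape as f1 and f2

The resulting tensor can then be fed to a small classifier that predicts whether the two inputs show the same person.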
“…Both images with suppressed BGs (our generated images) and images with full BGs are fed into the two individual streams of DA-2S, respectively. Unlike previous 2-stream methods (e.g., [2,5,40]), we propose Inter-Stream Densely Connection (ISDC) modules as new components used between the two streams of DA-2S. With ISDCs, more gradients produced by the final objective function can participate in strengthening the relationship between signals coming from the two different streams during back-propagation.…”
Section: Introduction (mentioning)
Confidence: 99%
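The excerpt above only outlines the idea of connecting two streams so that gradients from the final objective flow through both; the exact ISDC design is not given. The following is a hedged sketch of that general two-stream pattern in PyTorch, where the cross-stream mixing block, its name, and the 1x1 convolutions are illustrative assumptions rather than the ISDC specification.

import torch
import torch.nn as nn

class CrossStreamMix(nn.Module):
    """Illustrative cross-stream block: concatenates both streams so gradients reach both during back-propagation."""
    def __init__(self, channels: int):
        super().__init__()
        self.to_a = nn.Conv2d(2 * channels, channels, kernel_size=1)
        self.to_b = nn.Conv2d(2 * channels, channels, kernel_size=1)

    def forward(self, xa: torch.Tensor, xb: torch.Tensor):
        fused = torch.cat([xa, xb], dim=1)   # shared signal seen by both streams
        return self.to_a(fused), self.to_b(fused)

class TwoStreamNet(nn.Module):
    """One stream for background-suppressed images, one for full-background images (sketch)."""
    def __init__(self, stem_a: nn.Module, stem_b: nn.Module, channels: int):
        super().__init__()
        self.stem_a, self.stem_b = stem_a, stem_b
        self.mix = CrossStreamMix(channels)

    def forward(self, img_suppressed, img_full):
        xa = self.stem_a(img_suppressed)     # features from the background-suppressed image
        xb = self.stem_b(img_full)           # features from the full-background image
        return self.mix(xa, xb)              # cross-stream connection ties the two streams together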