2016
DOI: 10.48550/arXiv.1611.05244
Preprint
Deep Transfer Learning for Person Re-identification

Abstract: Person re-identification (Re-ID) poses a unique challenge to deep learning: how to learn a deep model with millions of parameters on a small training set of few or no labels. In this paper, a number of deep transfer learning models are proposed to address the data sparsity problem. First, a deep network architecture is designed which differs from existing deep Re-ID models in that (a) it is more suitable for transferring representations learned from large image classification datasets, and (b) classification loss and verification loss are combined, each of which adopts a different dropout strategy. …
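The design the abstract describes (an ImageNet-pretrained backbone fine-tuned with a combined identity-classification and pairwise-verification loss, each head behind its own dropout) can be illustrated with a minimal PyTorch sketch. This is an assumption-laden illustration, not the paper's code: the backbone choice (ResNet-50), embedding size, dropout rates, and the names `ReIDNet`/`forward_pair` are all placeholders.

```python
# Minimal sketch: ImageNet-pretrained backbone fine-tuned for Re-ID with a
# combined classification + verification loss, each head using its own dropout.
# All hyperparameters and class/method names here are illustrative assumptions.
import torch
import torch.nn as nn
from torchvision import models

class ReIDNet(nn.Module):
    def __init__(self, num_identities, feat_dim=256):
        super().__init__()
        backbone = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)
        self.backbone = nn.Sequential(*list(backbone.children())[:-1])  # drop ImageNet FC head
        self.embed = nn.Linear(2048, feat_dim)
        self.cls_dropout = nn.Dropout(p=0.5)   # classification head: heavier dropout (assumed rate)
        self.ver_dropout = nn.Dropout(p=0.2)   # verification head: lighter dropout (assumed rate)
        self.classifier = nn.Linear(feat_dim, num_identities)  # "which person?" logits
        self.verifier = nn.Linear(feat_dim, 2)                 # same / different pair logits

    def forward(self, x):
        f = self.embed(self.backbone(x).flatten(1))  # (N, feat_dim) identity embedding
        return f, self.classifier(self.cls_dropout(f))

    def forward_pair(self, xa, xb):
        fa, logits_a = self.forward(xa)
        fb, logits_b = self.forward(xb)
        # Verification head scores the (dropped-out) elementwise feature difference.
        ver_logits = self.verifier(self.ver_dropout((fa - fb).abs()))
        return logits_a, logits_b, ver_logits

# Usage: sum cross-entropy over both identity heads and the verification head.
model = ReIDNet(num_identities=751)  # Market-1501 has 751 training identities
xa, xb = torch.randn(4, 3, 256, 128), torch.randn(4, 3, 256, 128)
logits_a, logits_b, ver_logits = model.forward_pair(xa, xb)
labels_a, labels_b = torch.randint(0, 751, (4,)), torch.randint(0, 751, (4,))
same = (labels_a == labels_b).long()
ce = nn.CrossEntropyLoss()
loss = ce(logits_a, labels_a) + ce(logits_b, labels_b) + ce(ver_logits, same)
```

The two-stepped fine-tuning the abstract mentions might, for instance, first train only the new heads with the backbone frozen before unfreezing everything; the paper's exact transfer schedule is not reproduced here.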

Cited by 99 publications (100 citation statements)
References 54 publications
“…In order to address these issues, most existing person Re-ID methods are designed on supervised learning [45,46,28,14,47,43,44,19], which aims to learn a discriminative representation from labeled data. Recently, benefiting from the success of deep learning [20,6,18], supervised learning based methods have obtained significant performance improvements [35,7].…”
Section: Market-1501
mentioning confidence: 99%
“…Second, we intend to study how to further exploit discriminative features from the last convolutional layers instead of using global average pooling.

Method             Rank-1  Rank-5  Rank-10  mAP
[38]               44.4    63.9    72.2     20.8
WARCA [13]         45.2    68.1    76.0     -
KLFDA [14]         46.5    71.1    79.9     -
SOMAnet [3]        73.9    -       -        47.9
SVDNet [25]        82.3    92.3    95.2     62.1
PAN [40]           82.8    -       -        63.4
Transfer [11]      83.7    -       -        65.5
Triplet Loss [12]  84.9    94.2    -        69.1
DML [34]           87.7    -       -        68.8
MultiRegion [26]   66.4    85.0    90.2     41.2
HydraPlus [21]     76.9    91.3    94.5     -
PAR [36]           81.0    92.0    94.7     -
MultiLoss [16]     83.9    -       -        64.4
PDC* [24]          84.4    92.7    94.9     63.4
PartLoss [31]      88.…”
Section: Discussion
mentioning confidence: 99%
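The direction this quote raises (keeping spatial detail from the last convolutional layers rather than collapsing it with global average pooling) is commonly realized as stripe-based part pooling, as in PCB-style models. A minimal sketch of the contrast follows; the feature-map shape assumes a ResNet-50 backbone and is an illustrative assumption, not the cited authors' implementation.

```python
# Contrast between global average pooling (GAP) and PCB-style part pooling on
# the last conv feature map. Shapes assume a ResNet-50 backbone on a 384x128
# input; both the shape and the stripe count (6) are illustrative assumptions.
import torch
import torch.nn.functional as F

feat_map = torch.randn(8, 2048, 24, 8)   # (batch, channels, height, width)

# GAP: one 2048-d vector per image; all spatial structure is discarded.
gap_feat = F.adaptive_avg_pool2d(feat_map, 1).flatten(1)         # (8, 2048)

# Part pooling: average each of 6 horizontal stripes separately, keeping the
# coarse head-to-foot layout that GAP throws away.
part_feat = F.adaptive_avg_pool2d(feat_map, (6, 1)).flatten(2)   # (8, 2048, 6)
part_feat = part_feat.permute(0, 2, 1)                           # (8, 6, 2048)
```

Each stripe vector can then be classified or matched independently, which is one way the "more discriminative than GAP" intuition in the quote is typically cashed out.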
“…However, the proposed model shows the best accuracy among the three.

Method             Rank-1  mAP    Rank-1 (MQ)  mAP (MQ)
[22]               76.90   -      -            -
HP-Net [31]        76.90   -      -            -
PIE (poseBox) [5]  79.33   55.95  -            -
DLPAR [32]         81.00   63.40  -            -
SSM [33]           82.21   68.80  88.2         76.2
PDC [34]           84.14   63.41  -            -
SVDNet (RE) [29]   87.08   71.31  -            -
DML [35]           87.70   68.80  -            -
DeepTransfer [36]  83.70   65.50  89.60        73.80
JLML [37]          85.10   65.50  89.70        73.80
MLFN [38]          90.00   74.30  -            -
HA-CNN [39]        91.20   75.70  -            -
PCB [14]           93.80   81.60  -            -
US-GAN [11]        83.97   66.07  88.42        76.10
Liu et al [24]     87.65   68.92  -            -
PN-GAN [26]        89

3) Robustness: Our pt-GAN model shows good robustness towards occlusion, illumination and scale. As seen for person 1 in Figure 5, the occlusion (railings) present in the input image is not propagated to any of the generated images.…”
Section: B. Pose Clustering
mentioning confidence: 99%