2019 IEEE/CVF International Conference on Computer Vision (ICCV)
DOI: 10.1109/iccv.2019.00064

Robust Person Re-Identification by Modelling Feature Uncertainty

Abstract: We aim to learn deep person re-identification (ReID) models that are robust against noisy training data. Two types of noise are prevalent in practice: (1) label noise caused by human annotator errors and (2) data outliers caused by person detector errors or occlusion. Both types of noise pose serious problems for training ReID models, yet have been largely ignored so far. In this paper, we propose a novel deep network termed DistributionNet for robust ReID. Instead of representing each person image as a featur…
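The abstract is truncated, but the core idea it states — representing each person image as a distribution in feature space rather than a single point — can be sketched as follows. This is an illustrative PyTorch sketch only; the class and head names are invented for the example and are not taken from the DistributionNet paper.

```python
import torch
import torch.nn as nn

class DistributionalEmbedding(nn.Module):
    """Sketch: map a backbone feature to a Gaussian (mean + variance)
    instead of a single feature vector. Names are illustrative, not
    the paper's actual architecture."""
    def __init__(self, in_dim=2048, feat_dim=512):
        super().__init__()
        self.mu_head = nn.Linear(in_dim, feat_dim)      # mean vector
        self.logvar_head = nn.Linear(in_dim, feat_dim)  # log-variance

    def forward(self, backbone_feat):
        mu = self.mu_head(backbone_feat)
        logvar = self.logvar_head(backbone_feat)
        # Reparameterisation: sample a feature from N(mu, sigma^2).
        # Noisy or outlier images can be assigned larger variance, so a
        # single corrupted sample influences the loss less sharply.
        std = torch.exp(0.5 * logvar)
        z = mu + std * torch.randn_like(std)
        return z, mu, logvar

# Usage: a batch of 4 backbone features of dimension 2048.
x = torch.randn(4, 2048)
z, mu, logvar = DistributionalEmbedding()(x)
```

The variance head is what gives the model a handle on per-sample uncertainty; how it is supervised is where robust-ReID methods differ.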

Cited by 116 publications (65 citation statements)
References 31 publications
“…Variants of this loss, also referred to as triplet loss, are adopted by the methods in [4,11,22,24,37]. Some works proposed novel loss functions [8,38]. Finally, combinations of losses are incorporated in [2,21,38,40].…”
Section: Supervised Methods
confidence: 99%
“…Some works proposed novel loss functions [8,38]. Finally, combinations of losses are incorporated in [2,21,38,40].…”
Section: Supervised Methods
confidence: 99%
“…Since this method relies on clustering to assign the same pseudolabel to samples belonging to the same cluster and then optimizes the model with the pseudolabel as the supervision information [19], the credibility of the pseudolabel determines the performance of the model [20]. If we have a highly credible pseudolabel, the model can be adapted to the target domain data [21][22][23]. However, in the original dataset, there may be some samples with label noise, and the noisy samples interfere with the model training process because of their incorrect information in the feature space.…”
Section: Introduction
confidence: 99%
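The clustering-based pseudo-labelling the citation above describes — cluster target-domain features, treat cluster ids as labels, and filter out low-credibility samples before fine-tuning — can be sketched in a few lines. This is a minimal illustration with toy data and an assumed distance-to-centroid credibility filter, not the cited papers' actual pipelines.

```python
import numpy as np
from sklearn.cluster import KMeans

# Toy target-domain features: two separated blobs standing in for two identities.
rng = np.random.default_rng(0)
feats = np.vstack([rng.normal(5.0, 1.0, (20, 8)),
                   rng.normal(-5.0, 1.0, (20, 8))])

# Step 1: cluster the features; each cluster id becomes a pseudo-label.
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(feats)
pseudo = km.labels_

# Step 2 (assumed credibility filter): keep only samples close to their
# cluster centroid, since far-away samples are more likely to carry
# label noise and would mislead the fine-tuning step.
dist = np.linalg.norm(feats - km.cluster_centers_[pseudo], axis=1)
keep = dist < np.percentile(dist, 80)
clean_feats, clean_labels = feats[keep], pseudo[keep]
```

In practice the feature extractor, clustering algorithm (e.g. DBSCAN instead of k-means), and filtering criterion all vary between methods; the percentile threshold here is purely illustrative.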
“…These challenges make AAA more complicated. As pointed out in [17], modeling data uncertainty is beneficial when training on noisy data [18][19][20][21]. However, recent data uncertainty learning methods treat the components of the modeled multivariate noise as independent for simplicity, which is not an appropriate assumption for intertwined variables like photographic style.…”
Section: Introduction
confidence: 99%
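The independence assumption the citation above criticises amounts to keeping only the diagonal of the noise covariance. A small numeric illustration (assumed toy data, not from any of the cited papers) shows what is lost when the components are in fact strongly correlated:

```python
import numpy as np

# Correlated 2-D "noise" on two intertwined variables.
rng = np.random.default_rng(1)
cov_true = np.array([[1.0, 0.9],
                     [0.9, 1.0]])
x = rng.multivariate_normal([0.0, 0.0], cov_true, size=5000)

# The diagonal (independence) model keeps only per-component variances,
# discarding the strong off-diagonal correlation.
cov_full = np.cov(x, rowvar=False)
cov_diag = np.diag(np.diag(cov_full))

# Log-determinant gap: the diagonal model overstates the noise volume
# whenever the components are actually correlated.
logdet_full = np.linalg.slogdet(cov_full)[1]
logdet_diag = np.linalg.slogdet(cov_diag)[1]
```

For a correlation of 0.9 the full covariance has determinant near 1 - 0.81 = 0.19 while the diagonal model's is near 1, so the independence assumption inflates the apparent uncertainty considerably.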