2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition
DOI: 10.1109/CVPR.2018.00110

Image-Image Domain Adaptation with Preserved Self-Similarity and Domain-Dissimilarity for Person Re-identification

Abstract: Person re-identification (re-ID) models trained on one domain often fail to generalize well to another. To address this, we present a "learning via translation" framework. As a baseline, we translate the labeled images from the source to the target domain in an unsupervised manner, and then train re-ID models on the translated images with supervised methods. Yet, as an essential part of this framework, unsupervised image-image translation suffers from the loss of source-domain label information during translation.…
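
The two constraints named in the title lend themselves to a Siamese contrastive formulation: an image and its translation should stay close in an embedding space (self-similarity), while a translated image and any real target-domain image should be pushed apart (domain-dissimilarity). The sketch below illustrates this idea in PyTorch; the encoder stand-ins, margin value, and variable names are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch (assuming a Siamese contrastive formulation) of the two
# constraints named in the title. Embeddings, margin, and batch size are
# illustrative placeholders, not the authors' implementation.
import torch
import torch.nn.functional as F

def contrastive_loss(a: torch.Tensor, b: torch.Tensor,
                     same: torch.Tensor, margin: float = 2.0) -> torch.Tensor:
    """Pull pairs labeled same=1 together; push same=0 pairs
    at least `margin` apart in embedding space."""
    d = F.pairwise_distance(a, b)
    pos = same * d.pow(2)                         # similar pairs: shrink distance
    neg = (1 - same) * F.relu(margin - d).pow(2)  # dissimilar pairs: enforce margin
    return (pos + neg).mean()

# Stand-in embeddings for a batch of 8 images (128-D), e.g. from a shared encoder:
#   f_src  ~ encoder(x)       a labeled source image
#   f_tran ~ encoder(G(x))    its source-to-target translation
#   f_tgt  ~ encoder(y)       an arbitrary real target-domain image
f_src, f_tran, f_tgt = (torch.randn(8, 128) for _ in range(3))

# Self-similarity: an image and its translation share the same identity,
# so their embeddings should stay close (label 1).
loss_self = contrastive_loss(f_src, f_tran, torch.ones(8))

# Domain-dissimilarity: a translated image and a real target image depict
# different identities, so they should be pushed apart (label 0).
loss_dis = contrastive_loss(f_tran, f_tgt, torch.zeros(8))

loss = loss_self + loss_dis
```

Adding such terms to the translation objective is one way to keep translated images usable as labeled training data in the target domain, which is the information-loss problem the abstract raises.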

Cited by 862 publications (741 citation statements: 3 supporting, 738 mentioning, 0 contrasting)
References 47 publications
Citing publications span 2018–2022.
“…To better exploit and adapt visual information across data domains, methods based on domain adaptation [8,24] have been utilized [12,14,29,35,49,61]. However, since the identities, viewpoints, body poses and background clutter can be very different across datasets, plus no label supervision is available at the target domain, the performance gains might be limited.…”
Section: Related Work (mentioning)
Confidence: 99%
“…In Table 1, we compare our proposed model with the use of Bag-of-Words (BoW) [58] for matching (i.e., no transfer), four unsupervised re-ID approaches, including UMDL [42], PUL [15], CAMEL [54] and TAUDL [29], and seven cross-dataset re-ID methods, including PTGAN [51], SPGAN [12], TJ-AIDL [49], MMFA [35], HHL [61], CFSM [3] and ARN [32]. From this table, we see that our model achieved very promising results. Compared to SPGAN [12] and HHL [61], we note that our model is able to generate cross-domain images conditioned on various poses rather than a few camera styles. Compared to MMFA [35], our model further disentangles the pose information and learns a pose invariant cross-domain latent space.…”
Section: Quantitative Comparisons (mentioning)
Confidence: 99%