2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) 2021
DOI: 10.1109/cvpr46437.2021.01272
Unsupervised Multi-Source Domain Adaptation for Person Re-Identification

Cited by 66 publications (22 citation statements)
References 26 publications
“…(3) Unsupervised domain adaptation methods, which assume a labelled source domain and an unlabelled target domain with a different data distribution or image style. Recent works (Zheng et al. 2021; Wu, Zheng, and Lai 2019; Zhao et al. 2020b; Zhang et al. 2021; Bai et al. 2021) try to mitigate the domain gap between them for domain adaptation. (4) Pure unsupervised methods, which assume labelled source data is not available.…”
Section: Related Work (Person Re-identification)
confidence: 99%
“…They introduced a novel dissimilarity-based discrepancy loss that aligns the source and target distributions, making the results more effective. Multi-source domain training, used to make the model generalize to multiple unseen domains, was the approach adopted in [251]. This, however, raises the problem of domain gaps.…”
Section: CNN-based Approaches
confidence: 99%
“…We compare the LF² framework with state-of-the-art methods including: GAN-transfer-based methods (SPGAN+LMP [23], PDA-Net [24]), joint-learning-based methods (ECN [25], MMCL [27], JVTC+ [26], IDM [28]), and fine-tuning-based methods (SSG [7], ADTC [9], AD-Cluster [8], MMT [10], MEB-Net [11], Dual-Refinement [15], UNRN [14], GLT [12], HCD [13], P²LR [32], RDSBN+MDIF [33]). The comparison results are shown in Table I.…”
Section: B. Comparison With the State-of-the-Art
confidence: 99%
“…Based on the joint learning method, IDM [28] uses domain-specific batch normalization and achieves the best Rank-1 accuracy. The RDSBN+MDIF [33] and P²LR [32] methods also construct teacher-student networks as their baselines. However, these fine-tuning methods focus only on either pseudo-label refinement or domain-level information fusion.…”
Section: B. Comparison With the State-of-the-Art
confidence: 99%
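The domain-specific batch normalization mentioned in the statement above normalizes features with statistics kept separately per domain, rather than pooling all domains into one set of statistics. A minimal NumPy sketch of that idea (the function name and interface are illustrative assumptions, not the IDM implementation):

```python
import numpy as np

def domain_specific_batchnorm(x, domain_ids, eps=1e-5):
    """Normalize each feature column using mean/variance computed
    separately for each domain in the batch.

    x          : (N, D) array of features
    domain_ids : (N,) array of integer domain labels
    """
    out = np.empty_like(x, dtype=float)
    for d in np.unique(domain_ids):
        mask = domain_ids == d
        mu = x[mask].mean(axis=0)       # per-domain mean
        var = x[mask].var(axis=0)       # per-domain variance
        out[mask] = (x[mask] - mu) / np.sqrt(var + eps)
    return out
```

After this step each domain's features are zero-mean and unit-variance on their own terms, so a downstream re-identification head is not biased by per-domain shifts in feature scale; a full implementation would additionally track running statistics and learnable affine parameters per domain.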