2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) 2021
DOI: 10.1109/cvpr46437.2021.00805
Learning 3D Shape Feature for Texture-insensitive Person Re-identification

Cited by 56 publications (23 citation statements); references 41 publications.
“…HACNN [31], PCB [40], and IANet [20]) and six clothes-changing re-id methods (i.e. SPT+ASE [49], GI-ReID [28], CESD [35], RCSANet [25], 3DSL [6], and FSAM [18]) on LTCC and PRCC in Tab. 2.…”
Section: Comparison With State-of-the-art Methods
confidence: 99%
“…To this end, [52,55] attempt to use disentangled representation learning to decouple appearance and structural information from RGB images, and consider structural information as clothes-irrelevant features. In contrast, other researchers attempt to use multi-modality information (e.g., skeletons [35], silhouettes [18,28], radio signals [7], contour sketches [49], or 3D shape [6]) to model body shape and extract clothes-irrelevant features. However, the training of disentangled representation learning is time-consuming, and multi-modality-based methods need additional models or equipment to extract multi-modality information.…”
Section: Related Work
confidence: 99%