2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
DOI: 10.1109/cvpr52688.2022.01388
Cloth-Changing Person Re-identification from A Single Image with Gait Prediction and Regularization

Cited by 50 publications (16 citation statements)
References 50 publications
“…Since there are few works exploring the CCVReID task, we compare our method with four kinds of methods for a comprehensive evaluation: 1) video-based person Re-ID methods that do not involve clothes-changing, including AP3D [17], TCLNet [22], and SINet [1]; 2) gait recognition methods, including GaitSet [3], GaitPart [13], and GaitGL [29]; 3) image-based clothes-changing methods, including ReIDCaps [23], Pixel sampling [37], and GI-ReID [24]; 4) the video-based clothes-changing method CAL [16]. Since image-based methods receive a single frame as input, we report their results under two settings: randomly selecting one frame for feature extraction (denoted with the suffix "-R"), and extracting features from all frames and averaging them into the final feature (denoted with the suffix "-A").…”
Section: Comparison With State-of-the-art Methods
confidence: 99%
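The "-R" and "-A" evaluation settings described in the quoted excerpt can be sketched in a few lines. This is a minimal illustration of the aggregation protocol, not code from any cited paper; `extract_fn` stands in for a hypothetical per-image feature extractor.

```python
import numpy as np

def extract_video_feature(frames, extract_fn, mode="A", rng=None):
    """Aggregate per-frame features from an image-based Re-ID model.

    mode "R": extract the feature of one randomly chosen frame (the "-R" setting).
    mode "A": extract features of all frames and average them (the "-A" setting).
    """
    rng = rng or np.random.default_rng()
    if mode == "R":
        frame = frames[rng.integers(len(frames))]
        return extract_fn(frame)
    # "-A": stack per-frame features and take the mean as the final descriptor
    feats = np.stack([extract_fn(f) for f in frames])
    return feats.mean(axis=0)
```

The "-A" setting generally gives a more stable descriptor, since averaging suppresses per-frame noise, at the cost of running the extractor on every frame.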
“…[28] proposed a Clothing Agnostic Shape Extraction Network (CASE-Net) to learn shape-based feature representations via adversarial learning and feature disentanglement. In addition, other works have attempted to leverage contour sketches [44], silhouettes [21,24], faces [40], skeletons [34], 3D shape [6], or radio signals [14] to capture clothes-irrelevant features. Despite the achievements of image-based methods, they are susceptible to the quality of person images, i.e., they are less tolerant to noise due to the limited information contained in a single frame.…”
Section: Related Work
confidence: 99%
“…Person re-identification aims to search for a target person across surveillance videos recorded at different locations and times. Due to factors such as technological limitations, most current research on person re-identification assumes that the target's clothes are unchanged (Huang et al., 2018; Jin et al., 2022; Li et al., 2018), and thus uses the color, texture, and other features of the clothes as discriminative cues.…”
Section: Introduction
confidence: 99%
“…In recent years, many efforts [3,19,21,36,47,50,53] have been made to handle the cloth-changing issue by learning discriminative cloth-agnostic identity representations. A small proportion of methods [3,47,50] attempt to decouple cloth-agnostic features directly from RGB images without multi-modal auxiliary information, which inevitably leads to the loss of crucial information in global features and results in a heavy reliance on the training domain.…”
Section: Introduction
confidence: 99%