2017
DOI: 10.1109/tip.2017.2675201
Robust Depth-Based Person Re-Identification

Abstract: Person re-identification (re-id) aims to match people across non-overlapping camera views. So far, RGB-based appearance has been widely used in most existing works. However, when people appear under extreme illumination or change their clothes, RGB appearance-based re-id methods tend to fail. To overcome this problem, we propose to exploit depth information to provide more invariant body shape and skeleton information regardless of illumination and color change. More specifically, we exploit depth voxel …
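Below is a minimal sketch of the general idea behind a covariance-based depth descriptor whose eigenvalues are unchanged by orthogonal transforms of the underlying features, which is the rough intuition the citing papers attribute to the rotation-invariant Eigen-depth feature. It is an illustration under simplifying assumptions, not the authors' exact formulation; the function name eigen_depth_descriptor, the choice of per-pixel features, and the synthetic patches are hypothetical.

```python
import numpy as np

def eigen_depth_descriptor(depth_patch):
    """Hypothetical sketch of a covariance-of-features depth descriptor.

    Per-pixel features are pixel coordinates, depth, and local depth
    gradients; the sorted log-eigenvalues of their covariance matrix form a
    compact descriptor. Eigenvalues of a covariance matrix are unchanged by
    orthogonal transforms of the feature space, which is the loose intuition
    behind rotation invariance here; the published Eigen-depth feature is
    derived more carefully.
    """
    h, w = depth_patch.shape
    ys, xs = np.mgrid[0:h, 0:w].astype(np.float64)
    z = depth_patch.astype(np.float64)

    # Local depth gradients via finite differences.
    gy, gx = np.gradient(z)

    # Per-pixel feature vectors: (x, y, depth, dz/dx, dz/dy).
    feats = np.stack([xs.ravel(), ys.ravel(), z.ravel(),
                      gx.ravel(), gy.ravel()], axis=1)

    # Covariance of the per-pixel features over the patch.
    cov = np.cov(feats, rowvar=False)

    # Sorted log-eigenvalues as the descriptor (log for numerical stability).
    eigvals = np.linalg.eigvalsh(cov)
    return np.log(np.clip(eigvals, 1e-12, None))[::-1]

# Usage: descriptors of two depth patches compared with Euclidean distance.
patch_a = np.random.rand(32, 16) * 2.0 + 1.0   # synthetic depth values (metres)
patch_b = np.random.rand(32, 16) * 2.0 + 1.0
dist = np.linalg.norm(eigen_depth_descriptor(patch_a) - eigen_depth_descriptor(patch_b))
```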


Citations: cited by 137 publications (74 citation statements)
References: 78 publications (129 reference statements)
“…Nguyen et al. (Nguyen et al. 2017) first applied person re-identification models to visible-thermal images. Wu et al. (Wu, Zheng, and Lai 2017) designed a depth shape descriptor that is robust to rotation and noise. Meanwhile, Lin et al. (Lin et al. 2017) combined attribute information with image information for visible images.…”
Section: Related Work
confidence: 99%
“…Multimodal fusion of RGB and depth information is rarely considered in person re-id [18,21,27]. Liciotti et al. [18] propose a combination of hand-crafted RGB and depth features to capture color, texture, and anthropometric information.…”
Section: Related Work
confidence: 99%
“…Liciotti et al. [18] propose a combination of hand-crafted RGB and depth features to capture color, texture, and anthropometric information. RGB-D hand-crafted features are also proposed by Wu et al. [27], who extract a rotation-invariant Eigen-depth feature and fuse it with low-level color and texture features [17]. Only two previous proposals fuse RGB and depth features using a CNN [10,13].…”
Section: Related Work
confidence: 99%
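The statement above describes feature-level fusion of depth-based and RGB-based descriptors. A minimal sketch of one such fusion scheme, assuming simple l2-normalised concatenation with a modality weight, is given below; the function name fuse_features, the feature dimensions, and the weighting are illustrative assumptions rather than the cited authors' method.

```python
import numpy as np

def fuse_features(rgb_feat, depth_feat, alpha=0.5):
    """Hypothetical feature-level fusion: l2-normalise each modality,
    then concatenate with a weight alpha balancing RGB against depth."""
    rgb = rgb_feat / (np.linalg.norm(rgb_feat) + 1e-12)
    dep = depth_feat / (np.linalg.norm(depth_feat) + 1e-12)
    return np.concatenate([alpha * rgb, (1.0 - alpha) * dep])

# Usage: rank gallery persons by Euclidean distance to a probe in the fused space.
probe = fuse_features(np.random.rand(512), np.random.rand(64))
gallery = [fuse_features(np.random.rand(512), np.random.rand(64)) for _ in range(5)]
ranking = np.argsort([np.linalg.norm(probe - g) for g in gallery])
```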
“…There are more than 150 approaches to face detection. These approaches fall into four distinct categories: knowledge-based, feature-invariant, template-matching, and appearance-based [2]. In the following, a short review of these four categories is given.…”
Section: Introduction
confidence: 99%