2020
DOI: 10.1109/tmm.2020.2969782
Illumination-Adaptive Person Re-Identification

Cited by 79 publications (21 citation statements)
References 43 publications

Citation statements (ordered by relevance):
“…Many researchers have focused on introducing novel architectures to address the problem [9, 14–21]. In this section, we briefly discuss some recent deep architectures.…”
Section: Related Work (mentioning)
Confidence: 99%
“…Appearance based — Identifying people from their silhouettes can be approached as a re-identification (ReID) problem [18, 19, 20]. The vast majority of the literature on person ReID makes use of RGB images, as detailed in the review from Bedagkar-Gala et al. [21] and the more recent deep-learning review from Wu et al. [22].…”
Section: Related Work (mentioning)
Confidence: 99%
“…Inspired by the tremendous success of deep learning, many methods [4], [5], [6] have been introduced to learn deep expressive representations for person ReID and have achieved state-of-the-art performance. Typically, most of these methods [7], [8], [4], [9], [5], [10], [11], [12], [6], [13], [14], [15], [16], [17], [18], [19], [20] employ a triplet loss [7], [5], [13] or its combination with a classification loss [10], [11], [12] as the driving force to extract relevant features. Under this generic framework, several approaches have been developed to learn semantically rich and/or local features, such as the global-feature-based approach [14], [15], the data-augmentation-based approach [6], [13], and the striping approach [21], [10].…”
Section: Introduction (mentioning)
Confidence: 99%
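The excerpt above mentions the generic training objective used by many deep ReID methods: a triplet loss, optionally combined with an identity-classification (cross-entropy) loss. The PyTorch sketch below only illustrates that combination; it is not the implementation of the cited paper or of Illumination-Adaptive Person Re-Identification. The ReIDHead module, combined_loss helper, embedding dimension, margin, and loss weights are all illustrative assumptions.

```python
# Minimal sketch (assumed setup, not the paper's code): triplet loss on
# embeddings combined with a cross-entropy identity-classification loss.
import torch
import torch.nn as nn

class ReIDHead(nn.Module):
    """Hypothetical embedding + identity-classification head."""
    def __init__(self, feat_dim: int = 2048, emb_dim: int = 256, num_ids: int = 751):
        super().__init__()
        self.embed = nn.Linear(feat_dim, emb_dim)      # embedding used by the triplet term
        self.classifier = nn.Linear(emb_dim, num_ids)  # logits used by the classification term

    def forward(self, feats: torch.Tensor):
        emb = self.embed(feats)
        return emb, self.classifier(emb)

triplet = nn.TripletMarginLoss(margin=0.3)  # margin value is an assumption
xent = nn.CrossEntropyLoss()

def combined_loss(anchor, positive, negative, labels, head,
                  w_tri: float = 1.0, w_id: float = 1.0):
    """Weighted sum of triplet and classification losses (weights are assumptions)."""
    emb_a, logits_a = head(anchor)
    emb_p, _ = head(positive)
    emb_n, _ = head(negative)
    return w_tri * triplet(emb_a, emb_p, emb_n) + w_id * xent(logits_a, labels)

# Usage with random stand-in backbone features:
head = ReIDHead()
a, p, n = (torch.randn(8, 2048) for _ in range(3))
labels = torch.randint(0, 751, (8,))
loss = combined_loss(a, p, n, labels, head)
loss.backward()
```

In this sketch the embedding feeds the triplet term while the classifier logits feed the cross-entropy term; the relative weights w_tri and w_id are tuning choices rather than values taken from the cited works.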