2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
DOI: 10.1109/cvpr.2019.00071

Learning to Reduce Dual-Level Discrepancy for Infrared-Visible Person Re-Identification

Cited by 344 publications (242 citation statements) · References 19 publications
“…It mainly has two advantages: 1) end-to-end feature learning directly from the data without extra metric learning steps; 2) it simultaneously handles the cross-modality and intra-modality variations through the dual-constrained top-ranking loss to ensure the discriminability of the learnt person features. Learning to Reduce Dual-Level Discrepancy for Infrared-Visible Person Re-identification [26]: Wang et al. proposed a novel dual-level discrepancy reduction learning (D²RL) scheme to separately handle the two discrepancies: the modality discrepancy and the appearance discrepancy.…”
Section: Cross-modality Person Re-identification With Generative
confidence: 99%
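The dual-constrained top-ranking loss cited in this snippet enforces ranking both across and within modalities. The following is a minimal PyTorch sketch of that idea under stated assumptions: batches of visible and infrared features with identity labels, a hardest-positive/hardest-negative hinge per anchor, and illustrative function and argument names that do not come from any released code.

```python
import torch
import torch.nn.functional as F


def top_ranking_term(anchor, gallery, anchor_labels, gallery_labels, margin):
    """Hinge between each anchor's hardest positive and hardest negative."""
    dist = torch.cdist(anchor, gallery)                           # (B, B) Euclidean distances
    pos = anchor_labels.unsqueeze(1) == gallery_labels.unsqueeze(0)
    hardest_pos = (dist * pos.float()).max(dim=1).values          # farthest same-identity sample
    hardest_neg = (dist + pos.float() * 1e6).min(dim=1).values    # closest different-identity sample
    return F.relu(margin + hardest_pos - hardest_neg).mean()


def dual_constrained_top_ranking_loss(feat_v, feat_i, lab_v, lab_i, margin=0.5):
    # Cross-modality constraint: anchors are ranked against the other modality.
    cross = (top_ranking_term(feat_v, feat_i, lab_v, lab_i, margin)
             + top_ranking_term(feat_i, feat_v, lab_i, lab_v, margin))
    # Intra-modality constraint: the same ranking is enforced within each modality.
    intra = (top_ranking_term(feat_v, feat_v, lab_v, lab_v, margin)
             + top_ranking_term(feat_i, feat_i, lab_i, lab_i, margin))
    return cross + intra
```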
“…Wang et al. [43] performed a comprehensive survey of heterogeneous person re-identification. Specifically, several types of cross-modality person ReID have been studied, including Image-to-Text cross-modality retrieval [24], Photo-to-Sketch cross-modality retrieval [34], and the popular Infrared-to-Visible cross-modality retrieval [8,15,21,41,44,50,54,55,58]. Li et al. [24] proposed that searching for a person with free-form natural language descriptions can be widely applied in video surveillance, and built a dataset for image-text cross-modality retrieval.…”
Section: Cross-modality Retrieval
confidence: 99%
“…features in a shared space. Wang et al. [44] proposed D²RL, which consists of two GANs that conduct mutual translations between infrared and visible images. On the other hand, AlignGAN [41] accomplished pixel and feature alignment among the visible images, infrared images, and generated fake infrared images within a unified GAN framework.…”
Section: Cross-modality Retrieval
confidence: 99%
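The quoted description of D²RL suggests a two-stage pipeline: GAN-based image translation to reduce the modality discrepancy, followed by a conventional feature network to reduce the appearance discrepancy. The sketch below illustrates one plausible reading of that design; the module names, the modality flag, and the channel-wise concatenation of a real image with its translated counterpart are assumptions for illustration, not the authors' released implementation.

```python
import torch
import torch.nn as nn


class DualLevelSketch(nn.Module):
    """Illustrative two-stage pipeline: image-level translation, then feature learning."""

    def __init__(self, ir_to_vis: nn.Module, vis_to_ir: nn.Module, feature_net: nn.Module):
        super().__init__()
        self.ir_to_vis = ir_to_vis      # generator translating infrared -> visible (GAN-trained)
        self.vis_to_ir = vis_to_ir      # generator translating visible -> infrared (GAN-trained)
        self.feature_net = feature_net  # conventional re-ID backbone on the unified input

    def forward(self, img: torch.Tensor, modality: str) -> torch.Tensor:
        # Stage 1 (modality discrepancy): pair each image with its translated
        # counterpart so both modalities share one input representation.
        if modality == "infrared":
            unified = torch.cat([self.ir_to_vis(img), img], dim=1)
        else:  # "visible"
            unified = torch.cat([img, self.vis_to_ir(img)], dim=1)
        # Stage 2 (appearance discrepancy): standard embedding for person matching.
        return self.feature_net(unified)
```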
“…Some studies have used deep learning for cross-modality face recognition between RGB, NIR, and thermal. These provide an elegant solution to the complex problem by utilizing multiple networks [5][6][7][8]. However, there are very few deep-learning studies that focus solely on NIR face recognition.…”
Section: Introduction
confidence: 99%