2020
DOI: 10.1016/j.neucom.2020.01.089
Enhancing the discriminative feature learning for visible-thermal cross-modality person re-identification

Abstract: Existing person re-identification has achieved great progress in the visible domain, capturing all the person images with visible cameras. However, in a 24-hour intelligent surveillance system, the visible cameras may be noneffective at night. In this situation, thermal cameras are the best supplemental components, which capture images without depending on visible light. Therefore, in this paper, we investigate the visible-thermal cross-modality person re-identification (VT Re-ID) problem. In VT Re-ID, there a…

Cited by 98 publications (40 citation statements). References 29 publications.
“…In this subsection, we compare our proposed method with several cross-modality person ReID methods that include the following categories: 1) With different structures and loss functions, Two-Stream, One-Stream, Zero-Padding [39], HSME, D-HSME [11], BDTR, SDL [19], DGD+MSR [7], EDFL [23], HPILN [50], AGW [44], cm-SSFT [25], and TSLFN+HC [54] learned modality-invariant feature representation; 2) With the ideas of GAN, cmGAN [4],…”
Section: Comparison With State-of-the-art Methods
confidence: 99%
“…Previous works [12,44] provided results for most of the considered methods. The methods compared in this paper include manual feature design methods (HOG [45] and LOMO [3]) and deep learning methods [9,15,24–27,29,33–35,46–50]. The results of the compared methods were obtained from the original papers.…”
Section: Comparison With State-of-the-art Methods
confidence: 99%
“…In this approach, the mode is irrelevant. Liu et al [26] use two-way convolution to extract features. In the process of extracting features, skip connections are used to fuse the middle layers of the CNN model and enhance the robustness and discriminability of the extracted features.…”
Section: RGB-IR Re-ID Based on CNN Network
confidence: 99%
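The mid-layer fusion described in the quote above — two modality-specific branches whose mid-level features are carried forward via skip connections and concatenated with the deep features — can be sketched as follows. This is a minimal illustration, not the authors' implementation: the `stage` function is a toy stand-in for a CNN stage, and all weight shapes and names are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def stage(x, w):
    """Toy stand-in for one CNN stage (a dense layer with a nonlinearity)."""
    return np.tanh(x @ w)

# Hypothetical weights: modality-specific shallow stages, a shared deep stage.
w_shallow_vis = rng.normal(size=(8, 16))   # visible branch
w_shallow_thm = rng.normal(size=(8, 16))   # thermal branch
w_deep = rng.normal(size=(16, 16))         # shared across modalities

def extract(x, w_shallow):
    mid = stage(x, w_shallow)    # mid-level, modality-specific feature
    deep = stage(mid, w_deep)    # deep, modality-shared feature
    # Skip connection: fuse the mid-level feature with the deep feature
    # by concatenation, so the final descriptor keeps both levels.
    return np.concatenate([mid, deep], axis=-1)

vis_feat = extract(rng.normal(size=(1, 8)), w_shallow_vis)
thm_feat = extract(rng.normal(size=(1, 8)), w_shallow_thm)
print(vis_feat.shape)  # fused descriptor: 16 mid + 16 deep = 32 dims
```

Concatenation is only one plausible fusion choice; element-wise addition of the skipped features would serve the same illustrative purpose.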