2021
DOI: 10.1109/tip.2020.3045261
Bi-Directional Exponential Angular Triplet Loss for RGB-Infrared Person Re-Identification

Abstract: RGB-Infrared person re-identification (RGB-IR Re-ID) is a cross-modality matching problem with promising applications in the dark environment. Most existing works use Euclidean metric based constraints to resolve the discrepancy between features of different modalities. However, these methods are incapable of learning angularly discriminative feature embedding because Euclidean distance cannot measure the included angle between embedding vectors effectively. As an angularly discriminative feature space is impo…
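The abstract's central claim — that Euclidean distance does not effectively constrain the included angle between embedding vectors — can be illustrated with a small sketch. This is an illustrative example only, not code from the paper: two embeddings can point in exactly the same direction yet be far apart in Euclidean terms, while two nearly coincident points can be 90° apart angularly.

```python
import numpy as np

def euclidean(a, b):
    """Euclidean (L2) distance between two embedding vectors."""
    return np.linalg.norm(a - b)

def angle(a, b):
    """Included angle (radians) between two embedding vectors."""
    cos = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
    return np.arccos(np.clip(cos, -1.0, 1.0))

# Same direction, different magnitudes: zero angle, large Euclidean gap.
a = np.array([1.0, 0.0])
b = np.array([5.0, 0.0])

# Orthogonal directions, small magnitudes: 90 degrees apart, tiny Euclidean gap.
c = np.array([0.1, 0.0])
d = np.array([0.0, 0.1])

print(euclidean(a, b), np.degrees(angle(a, b)))  # 4.0, 0.0
print(euclidean(c, d), np.degrees(angle(c, d)))  # ~0.1414, 90.0
```

A loss built purely on Euclidean distance would judge the first pair "far" and the second "near", which is the opposite of their angular relationship — motivating angular constraints for learning a cosine-separable feature space.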


Cited by 53 publications (20 citation statements)
References 63 publications (81 reference statements)
“…We first compare the proposed MMN method with several other state-of-the-art methods to demonstrate its superiority. The competing methods include methods based on feature extraction (Zero-Padding [40], HCML [44], MHM [41], BDTR [46], MAC [43], cmGAN [3], MSR [7], HSME [12], SNR [16], expAT [42], CMM [22], CMSP [39], SSFT [24], DDAA [45], CoAL [38], NFS [1]) and methods based on image generation (D²RL [36], JSIA-ReID [35], AlignGAN [34], Hi-CMD [2], DG-VAE [29], X-Modality [20]). The results on the RegDB and SYSU-MM01 datasets are reported in Table 1.…”
Section: Comparison With State-of-the-art Methods
confidence: 99%
“…Zhang et al. mitigated the modality discrepancy by mapping the heterogeneous representations into a common space [42]. To learn an angularly separable common feature space, Ye et al. [1] constrained the angles between feature vectors. Cai et al. [43] proposed a dual-modality hard mining triplet-center loss (DTCL) which can reduce computational cost and mine hard triplet samples.…”
Section: Metric Learning
confidence: 99%
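The metric-learning statement above describes constraining angles between feature vectors in triplet form. The following is a minimal, generic angular triplet loss sketch — an assumption for illustration, NOT the paper's bi-directional exponential (expAT) formulation — in which the anchor-positive angle must be smaller than the anchor-negative angle by a margin:

```python
import numpy as np

def angular_triplet_loss(anchor, positive, negative, margin=0.3):
    """Illustrative angular triplet loss (hypothetical sketch, not expAT):
    hinge on the difference of included angles, with `margin` in radians."""
    def ang(u, v):
        # Included angle between u and v, clipped for numerical safety.
        cos = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
        return np.arccos(np.clip(cos, -1.0, 1.0))
    return max(0.0, ang(anchor, positive) - ang(anchor, negative) + margin)

anchor   = np.array([1.0, 0.0])
positive = np.array([0.9, 0.1])   # nearly aligned with the anchor
easy_neg = np.array([0.0, 1.0])   # orthogonal: triplet already satisfied
hard_neg = np.array([0.95, 0.05]) # closer in angle than the positive

print(angular_triplet_loss(anchor, positive, easy_neg))  # 0.0 (no violation)
print(angular_triplet_loss(anchor, positive, hard_neg))  # > 0 (hard negative penalized)
```

Unlike a Euclidean triplet loss, rescaling any embedding leaves this loss unchanged, which is the property the quoted works exploit to obtain an angularly separable space; the paper's actual loss additionally applies an exponential weighting and bi-directional constraints.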
“…Person re-identification (ReID) is a fundamental building block in various tasks of computer vision, such as intelligent surveillance, video analysis [1], and criminal investigation [2]. With the advancement of intelligent monitoring and the enormous expansion of video data in recent years, conventional human power has been challenging and insufficient to deal with intricate surveillance scenarios.…”
Section: Introduction
confidence: 99%
“…Influence of CMCC Loss: As shown in Table 3, the model with CMCC loss (B+CMCC) achieves a rank-1 accuracy of 56.63% and an mAP of 54.93%, which are higher than the baseline (B) by 7.47% and 7.95%, respectively. Besides, we implement expAT loss [40] and triplet center loss (TC) [12] with the baseline respectively. Compared with B+expAT and B+TC, the addition of CMCC loss brings a marked performance boost to baseline.…”
Section: Ablation Study
confidence: 99%