2023
DOI: 10.1016/j.inffus.2023.101901
Cross-directional consistency network with adaptive layer normalization for multi-spectral vehicle re-identification and a high-quality benchmark

Cited by 10 publications (9 citation statements)
References 61 publications
“…Thanks to the strong complementarity between visible light and infrared, these two modalities are increasingly used together in a variety of scenarios, making cross-modal Vehicle Re-Identification a prominent research area. Several studies [21–25] have explored this field. Pan et al. [21] propose a hybrid vision transformer (H-ViT) that learns inter- and intra-modal information to reduce feature deviations caused by modal variations, based on a modal-specific controller (MC) and a modal information embedding (MIE) structure.…”
Section: Cross-modal Vehicle Re-identification
confidence: 99%
“…Adversarial learning is also employed to bridge the modality gap at the image level. Zheng et al. [23] propose to handle this task in complex lighting environments and diverse scenes by exploiting multi-spectral sources, such as visible and infrared information. Li et al. [24] introduce a multi-spectral vehicle Re-ID benchmark named RGBN300, containing RGB and NIR vehicle images of 300 identities captured from 8 camera views.…”
Section: Cross-modal Vehicle Re-identification
confidence: 99%
“…As a result, although non-visible images have great potential to boost vehicle ReID performance in low-illumination environments, an open question remains for multi-modal ReID in practice: how to effectively fuse the complementary information from multi-modal data? Existing multi-modal vehicle Re-ID methods [17–20] mostly focus on learning modality-robust features. For example, Wang et al. [20] designed a cross-modal interacting module and a relation-based embedding module that exchange useful information across multi-modal features so as to enrich them.…”
Section: Introduction
confidence: 99%
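The idea of exchanging information between modal features, as in the cross-modal interacting module cited above, can be sketched minimally: each modality reweights its channels with attention derived from the *other* modality's pooled descriptor. This is an illustrative assumption, not the actual module from Wang et al. [20]; the function name `cross_modal_interact` and the channel-attention form are hypothetical.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_modal_interact(feat_rgb, feat_nir):
    """Exchange channel-wise statistics between two modal feature maps.

    feat_rgb, feat_nir: arrays of shape (C, H, W).
    Each modality is reweighted by channel attention computed from the
    other modality's globally pooled descriptor, so complementary
    channels (e.g., those informative under low illumination) are
    emphasized in both streams.
    """
    # Global average pool each modality: (C, H, W) -> (C,)
    g_rgb = feat_rgb.mean(axis=(1, 2))
    g_nir = feat_nir.mean(axis=(1, 2))
    # Channel attention derived from the opposite modality
    a_from_nir = softmax(g_nir)[:, None, None]   # guides the RGB stream
    a_from_rgb = softmax(g_rgb)[:, None, None]   # guides the NIR stream
    c = feat_rgb.shape[0]
    # Residual reweighting: "1 +" keeps the original signal intact,
    # and scaling by C keeps the average channel gain near 2x
    out_rgb = feat_rgb * (1.0 + c * a_from_nir)
    out_nir = feat_nir * (1.0 + c * a_from_rgb)
    return out_rgb, out_nir
```

A learned version would replace the fixed softmax attention with small fully connected layers per modality, but the exchange pattern (pool one stream, reweight the other) is the same.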
“…Both the cross-modal interacting and relation-based embedding modules are convolutional neural network (CNN) branches. Zheng et al. [19] proposed a cross-directional consistency network that mitigates cross-modal discrepancies and adjusts individual feature distributions to learn modality-robust features. Li et al. [17] proposed a heterogeneity-collaboration-aware multi-stream convolutional neural network that constrains the scores of different instances of the same identity to be coherent.…”
Section: Introduction
confidence: 99%
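The "adaptive layer normalization" in the indexed paper's title suggests adjusting each modality's feature distribution with its own affine parameters after normalization. The sketch below illustrates that general idea only; it is an assumption, not the network from Zheng et al. [19], and the per-modality parameter dictionary is hypothetical.

```python
import numpy as np

def adaptive_layer_norm(x, gamma, beta, eps=1e-5):
    """Layer-normalize a feature vector, then apply modality-specific
    affine parameters so each modality's distribution can be shifted
    and scaled independently (e.g., one (gamma, beta) pair for RGB,
    another for NIR/TIR)."""
    mu = x.mean()
    var = x.var()
    x_hat = (x - mu) / np.sqrt(var + eps)
    return gamma * x_hat + beta

# Hypothetical per-modality parameters, learned in a real network
modal_params = {
    "rgb": (1.0, 0.0),
    "nir": (0.8, 0.1),
}

def normalize_by_modality(x, modality):
    gamma, beta = modal_params[modality]
    return adaptive_layer_norm(x, gamma, beta)
```

The point of making the affine parameters modality-dependent is that normalized visible and infrared features can be mapped into a shared range without forcing both modalities through identical statistics.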