2023
DOI: 10.1109/tmm.2023.3237155
Cross-Modality Transformer With Modality Mining for Visible-Infrared Person Re-Identification

Cited by 17 publications (2 citation statements). References 53 publications.
“…(3) feature alignment-based methods: cm-SSFT [20], NFS [21], CMNAS [22], MPANet [23]; (4) transformer-based methods: SPOT [25], CMT [26], CMTR [27], DFLN-ViT [28], PMT [29].…”
Section: Comparison With State-of-the-art Methods
confidence: 99%
“…Jiang et al. [26] proposed a Cross-Modality Transformer (CMT) to achieve query-adaptive feature alignment through an instance-level alignment module. In 2023, Liang et al. [27] designed a Cross-Modality Transformer-based network (CMTR) that can generate identity-discriminative features and learn the information of each modality. Zhao et al. [28] proposed a Discriminative Feature Learning Network (DFLN) containing a spatial representation perception module to extract long-term dependencies between different positions.…”
Section: Visible-Infrared Cross-Modality Person Re-identification
confidence: 99%