2022
DOI: 10.1109/tits.2022.3219593

MART: Mask-Aware Reasoning Transformer for Vehicle Re-Identification

Cited by 7 publications (2 citation statements). References 61 publications.
“…We compared our method with some state-of-the-art (SOTA) approaches from the last three years, categorized into three groups: (1) Global feature-based (GF) methods, such as SN [13], VARID [12], VAT [18], and MsKAT [32], mainly concentrate on extracting a whole-image representation for vehicle images. (2) Local feature-based (LF) methods, including DPGM [14], LG-CoT [36], HPGN [37], DFR [38], DSN [39], SFMNet [40], GiT [31], SOFCT [22], and MART [41], integrate local features with the global feature to learn reliable vehicle representations. (3) Spatial-temporal (ST) methods, such as DPGM-ST [14] and DFR-ST [38], exploit extra timestamp and camera location information to enhance vehicle re-identification beyond visual features.…”
Section: Comparisons With State-of-the-art Methods
confidence: 99%
“…The second approach is based on visual features. Traditional feature descriptors such as SURF, as well as deep learning features, can be used [9], [10]. With this approach, the candidate VBMs share visual features of their external images, and the VPM compares the locally measured and shared visual features to determine whether the CAV is matched.…”
Section: Introduction
confidence: 99%
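The matching step described in the statement above (comparing locally measured descriptors against shared ones to decide whether two observations are the same vehicle) can be sketched as a nearest-neighbor descriptor match with a Lowe-style ratio test. This is a minimal illustration, not the cited papers' method: the function names, the ratio threshold, and the `min_matches` decision rule are all assumptions for illustration, and real systems would use an actual detector/descriptor (e.g. SURF or a learned embedding) rather than raw arrays.

```python
import numpy as np

def match_descriptors(local_desc, shared_desc, ratio=0.75):
    """Ratio-test matching between two descriptor sets (hypothetical helper).

    local_desc: (N, D) array of locally measured descriptors.
    shared_desc: (M, D) array of descriptors shared by a candidate vehicle.
    Returns a list of (local_index, shared_index) confident matches.
    """
    # Pairwise Euclidean distances, shape (N, M).
    dists = np.linalg.norm(local_desc[:, None, :] - shared_desc[None, :, :], axis=2)
    matches = []
    for i, row in enumerate(dists):
        order = np.argsort(row)
        best, second = row[order[0]], row[order[1]]
        # Accept only if the best match is clearly better than the runner-up.
        if best < ratio * second:
            matches.append((i, int(order[0])))
    return matches

def is_matched(local_desc, shared_desc, min_matches=10):
    # Declare a vehicle match when enough descriptors agree (threshold assumed).
    return len(match_descriptors(local_desc, shared_desc)) >= min_matches
```

In practice the decision threshold would be tuned on validation data, and geometric verification (e.g. RANSAC over match locations) would typically follow the ratio test.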