2021
DOI: 10.1007/s00521-021-06559-6
Dual attention granularity network for vehicle re-identification

Cited by 8 publications (3 citation statements)
References 39 publications

“…TCPM [25] divides the final feature map along the horizontal and vertical directions and uses an external memory module to store the partial features used to model the global feature vector. Dual+SA [41] uses self-attention to generate attention maps for the vehicle model and the vehicle ID, and feeds these maps to a part localization module to obtain fine-grained region features of ROIs. Relying only on visual information, our proposed model MRF-SAPL achieves 81.5% mAP, 94.7% Top-1 accuracy, and 98.7% Top-5 accuracy.…”
Section: Methods
confidence: 99%
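
To make the self-attention step mentioned above concrete, the following is a minimal PyTorch sketch of spatial self-attention that produces an attention map from a backbone feature map and uses it to pool a region descriptor. The module structure, reduction factor, and the mean-pooled attention map are assumptions for illustration, not the Dual+SA implementation.

```python
import torch
import torch.nn as nn


class SelfAttention2d(nn.Module):
    """Non-local style self-attention over the spatial positions of a feature map."""

    def __init__(self, channels: int, reduction: int = 8):
        super().__init__()
        inner = max(channels // reduction, 1)
        self.query = nn.Conv2d(channels, inner, kernel_size=1)
        self.key = nn.Conv2d(channels, inner, kernel_size=1)
        self.value = nn.Conv2d(channels, channels, kernel_size=1)

    def forward(self, feat: torch.Tensor):
        b, c, h, w = feat.shape
        q = self.query(feat).flatten(2).transpose(1, 2)   # (B, HW, C')
        k = self.key(feat).flatten(2)                      # (B, C', HW)
        v = self.value(feat).flatten(2)                     # (B, C, HW)
        affinity = torch.softmax(q @ k, dim=-1)             # (B, HW, HW) pairwise attention
        out = (v @ affinity.transpose(1, 2)).view(b, c, h, w)
        # average attention each location receives, collapsed into one spatial map
        attn_map = affinity.mean(dim=1).view(b, 1, h, w)
        return out, attn_map


# toy usage: derive an attention map from a backbone feature map and use it to
# pool a region descriptor (shapes are hypothetical)
feat = torch.randn(2, 256, 16, 16)
sa = SelfAttention2d(256)
attended, attn_map = sa(feat)
region_descriptor = (attended * attn_map).sum(dim=(2, 3))  # (2, 256)
```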
“…A dual attention re-identification network is proposed by the authors in [63], which selectively scores vehicle parts, assigning higher attention scores to the most relevant ones. The framework extracts vehicle features using a dual-branch CNN network.…”
Section: Related Work
confidence: 99%
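
A dual-branch extractor with attention-scored parts could look roughly like the PyTorch sketch below; the stripe-based part definition, the scoring head, and all dimensions are assumptions for illustration and are not taken from [63].

```python
import torch
import torch.nn as nn


class DualBranchReID(nn.Module):
    """Two-branch feature extractor: a global branch plus a part-scoring branch."""

    def __init__(self, backbone_a: nn.Module, backbone_b: nn.Module,
                 channels: int, num_parts: int = 4):
        super().__init__()
        self.branch_global = backbone_a            # CNN trunk -> (B, C, H, W)
        self.branch_parts = backbone_b
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.part_scorer = nn.Linear(channels, 1)  # one attention score per part stripe
        self.num_parts = num_parts

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        g = self.pool(self.branch_global(x)).flatten(1)   # (B, C) global descriptor
        p = self.branch_parts(x)                           # (B, C, H, W)
        b, c, h, w = p.shape
        # split the map into horizontal stripes as crude "parts"
        stripes = p.view(b, c, self.num_parts, h // self.num_parts, w).mean(dim=(3, 4))  # (B, C, P)
        scores = torch.softmax(self.part_scorer(stripes.transpose(1, 2)), dim=1)         # (B, P, 1)
        part_feat = (stripes.transpose(1, 2) * scores).sum(dim=1)                        # (B, C)
        return torch.cat([g, part_feat], dim=1)            # (B, 2C) re-ID embedding


# toy usage with two small hypothetical backbones
backbone_a = nn.Sequential(nn.Conv2d(3, 64, 3, stride=2, padding=1), nn.ReLU())
backbone_b = nn.Sequential(nn.Conv2d(3, 64, 3, stride=2, padding=1), nn.ReLU())
model = DualBranchReID(backbone_a, backbone_b, channels=64)
emb = model(torch.randn(2, 3, 128, 128))  # -> (2, 128)
```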
“…(1), the input of the attention layer is the output of the Conv1D layer, which has shape R^(F×T), where F is the number of filters, also called channels. In the Conv1D layer, all channels are given the same importance, which leads to the loss of important information [27], [28]. Therefore, the attention mechanism is designed to select the important channels.…”
Section: B. Cluster-Based Federated Learning
confidence: 99%
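
As a rough illustration of this kind of channel selection, here is a minimal PyTorch sketch of squeeze-and-excitation style channel attention applied to a Conv1D output of shape (B, F, T); the reduction factor, layer names, and toy dimensions are assumptions, not the cited design.

```python
import torch
import torch.nn as nn


class ChannelAttention1d(nn.Module):
    """SE-style channel attention for a Conv1D output of shape (B, F, T)."""

    def __init__(self, num_filters: int, reduction: int = 4):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(num_filters, num_filters // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(num_filters // reduction, num_filters),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (B, F, T) output of a Conv1D layer
        squeeze = x.mean(dim=2)            # (B, F) per-channel summary over time
        weights = self.fc(squeeze)         # (B, F) learned channel importance
        return x * weights.unsqueeze(-1)   # re-weight channels, keep shape (B, F, T)


# toy usage on a hypothetical Conv1D feature map with F=32 filters and T=100 steps
conv = nn.Conv1d(in_channels=8, out_channels=32, kernel_size=3, padding=1)
attn = ChannelAttention1d(num_filters=32)
x = torch.randn(4, 8, 100)
out = attn(conv(x))  # (4, 32, 100)
```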