2020
DOI: 10.1109/tcsvt.2020.2977427
Three-Dimension Transmissible Attention Network for Person Re-Identification

Cited by 16 publications (7 citation statements)
References 39 publications
“…Comparison on Market-1501 (Rank-1 / mAP, %):

Method                   Backbone      Rank-1   mAP
DaRe (CVPR18) [33]       ResNet50      86.4     69.3
DaRe+RE (CVPR18) [33]    ResNet50      88.5     74.2
PSE+ECN (CVPR18) [36]    ResNet50      90.4     80.5
HA-CNN (CVPR18) [26]     ResNet50      91.2     75.7
DuATM (CVPR18) [37]      Inception-A   91.4     76.6
PCB+RPP (CVPR18) [38]    ResNet50      93.8     81.6
MHN-PCB (ICCV19) [45]    ResNet50      95.1     85.0
MGN (ACMMM18) [42]       ResNet50      95.7     86.9
HPM (AAAI19) [66]        ResNet50      94.2     82.7
AANet (CVPR19) [17]      ResNet152     93.9     83.4
DCDS (ICCV19) [46]       ResNet101     94.8     85.8
OSNet (ICCV19) [48]      OSNet         94.8     84.9
GCP (AAAI20) [47]        ResNet50      94.8     88.0
SAN (AAAI20) [49]        ResNet50      95.1     85.8
3DTANet (TCSVT20) [50]   -             95.3     86.9
HOReID (CVPR20) [51]     ResNet50      94.2     84.9
RGA-CS (CVPR20) [        (row truncated in the excerpt)

… highest accuracy on mAP and only 0.3% improvement over our method. DukeMTMC-reID: it is another standard dataset that contains a sufficient number of images for deep learning.…”
Section: Methods (citation type: mentioning; confidence: 99%)
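The Rank-1 and mAP columns in the table above are standard retrieval metrics. A minimal sketch of how they are computed from ranked match lists (the toy data is hypothetical, not drawn from any dataset):

```python
# Minimal sketch of the two retrieval metrics used in re-ID benchmarks:
# Rank-1 accuracy and mean Average Precision (mAP). The match lists
# below are hypothetical toy data.

def rank1(ranked_matches):
    """Fraction of queries whose top-ranked gallery item is a true match.

    ranked_matches: list of per-query lists of 0/1 flags, ordered by
    decreasing similarity (1 = same identity as the query).
    """
    return sum(m[0] for m in ranked_matches) / len(ranked_matches)

def mean_ap(ranked_matches):
    """Mean over queries of Average Precision along the ranked list."""
    aps = []
    for m in ranked_matches:
        hits, precisions = 0, []
        for i, is_match in enumerate(m, start=1):
            if is_match:
                hits += 1
                precisions.append(hits / i)  # precision at each hit
        aps.append(sum(precisions) / max(hits, 1))
    return sum(aps) / len(aps)

# Two toy queries: the first is matched at ranks 1 and 3, the second at rank 2.
toy = [[1, 0, 1, 0], [0, 1, 0, 0]]
print(rank1(toy))    # 0.5
print(mean_ap(toy))  # ((1/1 + 2/3)/2 + (1/2)/1) / 2
```

Real benchmark code additionally excludes same-camera gallery entries per the Market-1501 protocol; that filtering is omitted here for brevity.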
“…Attention mechanisms have also been adopted to address the re-ID task [17], [56], [55], [58], [45], [38], [50]; these works aim to highlight the more discriminative features in an image (such as color or texture) through an attention mechanism while suppressing features that are irrelevant to the task (such as background). Sun et al. [27] simply used an attention-based positioning method to locate the visible regions in a given image and learned local features from those regions.…”
Section: Attention Mechanism for Person Re-ID (citation type: mentioning; confidence: 99%)
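The idea in the excerpt above, weighting spatial positions so that discriminative regions dominate and background is suppressed, can be sketched as follows. This is a generic illustration in the spirit of the cited methods, not the exact formulation of any cited paper; the feature map shape and toy values are assumptions:

```python
# Generic spatial-attention sketch: weight each spatial position of a
# C x H x W feature map by a softmax over its channel-mean activation,
# then pool. Not the exact method of any paper cited above.
import math

def spatial_attention(feat):
    """Return (attention map, attention-weighted pooled descriptor)."""
    C, H, W = len(feat), len(feat[0]), len(feat[0][0])
    # Channel-mean "energy" per spatial position.
    energy = [[sum(feat[c][h][w] for c in range(C)) / C
               for w in range(W)] for h in range(H)]
    # Softmax over all H*W positions, so weights sum to 1.
    exps = [[math.exp(e) for e in row] for row in energy]
    z = sum(sum(row) for row in exps)
    attn = [[e / z for e in row] for row in exps]
    # Attention-weighted global pooling -> one C-dimensional descriptor.
    pooled = [sum(attn[h][w] * feat[c][h][w]
                  for h in range(H) for w in range(W)) for c in range(C)]
    return attn, pooled

# Toy 2-channel 2x2 map: only position (0, 0) is active, so it receives
# the largest attention weight and dominates the pooled descriptor.
toy = [[[1.0, 0.0], [0.0, 0.0]],
       [[2.0, 0.0], [0.0, 0.0]]]
attn, pooled = spatial_attention(toy)
```

In a trained network the energy would come from learned convolutions rather than a plain channel mean; the channel mean keeps the sketch self-contained.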
“…Currently, the majority of re-ID methods are based on supervised learning and have achieved impressive performance, although they require a large amount of labeled data. With the development of deep learning, methods based on convolutional neural networks (CNNs) [3], [22]-[25] have come to dominate this field. These methods mostly focus on extracting more discriminative features to improve recognition accuracy.…”
Section: Related Work, A. Supervised Person Re-Identification (citation type: mentioning; confidence: 99%)
“…Huang et al. [30] propose a method that spatially attends to regions of interest so that the discriminative information in the image is magnified. Huang et al. [25] also propose 3-Dimension Transmissible Attention (3DTA), which cooperatively applies channel attention and spatial attention, together with a group loss that optimizes feature distances. In [31], a relation network introduces a relation-aware global attention mechanism and achieves superior performance.…”
Section: Related Work, A. Supervised Person Re-Identification (citation type: mentioning; confidence: 99%)
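The excerpt above describes 3DTA as combining channel attention with spatial attention. The channel-attention ingredient can be illustrated with a generic squeeze-and-excitation-style gate; this is an assumed, simplified stand-in, not the paper's actual 3DTA formulation:

```python
# Generic channel-attention sketch (squeeze-and-excitation style):
# reweight each channel of a C x H x W feature map by a sigmoid gate
# driven by its global-average activation. This illustrates the
# channel-attention ingredient that 3DTA combines with spatial
# attention; it is NOT the exact 3DTA module from the paper.
import math

def channel_attention(feat):
    """Return the feature map with each channel scaled by its gate."""
    C, H, W = len(feat), len(feat[0]), len(feat[0][0])
    # Squeeze: global average pool per channel.
    squeezed = [sum(feat[c][h][w] for h in range(H) for w in range(W)) / (H * W)
                for c in range(C)]
    # Excite: sigmoid gate per channel. (A real SE block would insert a
    # small two-layer MLP here; omitted to keep the sketch minimal.)
    gates = [1.0 / (1.0 + math.exp(-s)) for s in squeezed]
    # Scale: multiply every value in channel c by its gate.
    return [[[gates[c] * feat[c][h][w] for w in range(W)]
             for h in range(H)] for c in range(C)]

# Toy input: channel 0 has mean activation 2.0, channel 1 is all zero,
# so channel 0 receives the stronger gate (sigmoid(2) vs sigmoid(0)).
toy = [[[2.0, 2.0], [2.0, 2.0]],
       [[0.0, 0.0], [0.0, 0.0]]]
out = channel_attention(toy)
```

Composing this gate with the spatial weighting sketched earlier, and training with a metric loss over feature distances, gives the general channel-plus-spatial attention recipe the excerpt attributes to 3DTA.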