2019
DOI: 10.1016/j.patcog.2018.08.015

Attention driven person re-identification

Abstract: Person re-identification (ReID) is a challenging task due to arbitrary human pose variations, background clutter, etc. It has been studied extensively in recent years, but the multifarious local and global features are still not fully exploited: existing methods either ignore the interplay between whole-body images and body-part images or lack an in-depth examination of specific body-part images. In this paper, we propose a novel attention-driven multi-branch network that learns robust and discriminative human representations…
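The abstract describes the attention-driven multi-branch design only at a high level. As an illustration of what such a design can look like, here is a minimal PyTorch sketch combining a whole-body (global) branch with an attention-weighted (local) branch; the ResNet-50 backbone, module names, and feature dimensions are assumptions for this example, not the paper's exact architecture:

```python
# Illustrative sketch of an attention-driven multi-branch ReID network.
# Backbone choice, branch layout, and dimensions are assumptions made
# for this example, not the paper's exact design.
import torch
import torch.nn as nn
from torchvision.models import resnet50

class SpatialAttention(nn.Module):
    """Predicts a (B, 1, H, W) map in [0, 1] that reweights locations."""
    def __init__(self, in_channels):
        super().__init__()
        self.score = nn.Conv2d(in_channels, 1, kernel_size=1)

    def forward(self, x):
        attn = torch.sigmoid(self.score(x))
        return x * attn, attn

class MultiBranchReID(nn.Module):
    def __init__(self, num_ids, feat_dim=256):
        super().__init__()
        base = resnet50(weights=None)  # random init; load weights in practice
        self.backbone = nn.Sequential(*list(base.children())[:-2])  # (B, 2048, H, W)
        self.attn = SpatialAttention(2048)
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.global_head = nn.Linear(2048, feat_dim)  # whole-body features
        self.local_head = nn.Linear(2048, feat_dim)   # attention-weighted features
        self.classifier = nn.Linear(2 * feat_dim, num_ids)

    def forward(self, x):
        fmap = self.backbone(x)
        attended, attn_map = self.attn(fmap)
        g = self.global_head(self.pool(fmap).flatten(1))
        l = self.local_head(self.pool(attended).flatten(1))
        feat = torch.cat([g, l], dim=1)  # joint global + local descriptor
        return self.classifier(feat), feat, attn_map
```

At test time the concatenated descriptor `feat` would be used for retrieval by cosine or Euclidean distance; the identity classifier is only needed during training.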

Cited by 159 publications (63 citation statements); citing publications range from 2019 to 2024.
References 66 publications (143 reference statements).
“…For example, Cai et al. (2019) utilized body masks to guide the training of the attention module. Yang et al. (2019) proposed an end-to-end trainable framework composed of local and fusion attention modules that can incorporate image partition using human key-point estimation. Our proposed MRFA module is designed to address the imperfect detection issue mentioned above.…”
Section: Related Work
Citation type: mentioning (confidence: 99%)
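The key-point-based image partition mentioned in this statement can be made concrete with a short sketch. The COCO-style key-point ordering, stripe boundaries, and function name below are assumptions for illustration, not the cited framework's implementation:

```python
# Illustrative sketch: partition a feature map into body-part regions
# using estimated key points. The region scheme is an assumption, not
# the cited papers' exact method.
import torch

def partition_by_keypoints(fmap, keypoints):
    """
    fmap:      (B, C, H, W) convolutional feature map.
    keypoints: (B, K, 2) normalized (y, x) coordinates in [0, 1], assumed
               COCO-ordered so indices 5/6 are shoulders and 11/12 hips.
    Returns pooled part features [(B, C), ...] for head, torso, legs.
    """
    B, C, H, W = fmap.shape
    ys = keypoints[..., 0] * (H - 1)            # key-point rows (float)
    shoulders = ys[:, 5:7].mean(dim=1).long()   # head/torso boundary
    hips = ys[:, 11:13].mean(dim=1).long()      # torso/leg boundary
    parts = []
    for lo_t, hi_t in ((None, shoulders), (shoulders, hips), (hips, None)):
        feats = []
        for b in range(B):
            lo = 0 if lo_t is None else int(lo_t[b].clamp(0, H - 1))
            hi = H if hi_t is None else int(hi_t[b].clamp(1, H))
            hi = max(hi, lo + 1)                # keep the stripe non-empty
            feats.append(fmap[b, :, lo:hi, :].mean(dim=(1, 2)))
        parts.append(torch.stack(feats))        # (B, C) per body part
    return parts
```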
“…Results on Market-1501 and DukeMTMC-reID (rank-1 accuracy and mAP, %):

| Method | Market-1501 rank-1 | Market-1501 mAP | DukeMTMC-reID rank-1 | DukeMTMC-reID mAP |
|---|---|---|---|---|
| SVDNet | 82.3 | 62.1 | 76.7 | 56.8 |
| PAN | 82.8 | 63.4 | 71.6 | 51.5 |
| MultiScale (Chen et al., 2017) | 88.9 | 73.1 | 79.2 | 60.6 |
| MLFN (Chang et al., 2018) | 90.0 | 74.3 | 81.0 | 62.8 |
| HA-CNN (Li et al., 2018) | 91.2 | 75.7 | 80.5 | 63.8 |
| Mancs (Wang et al., 2018a) | 93.1 | 82.3 | 84.9 | 71.8 |
| Attention-Driven (Yang et al., 2019) | 94.9 | 86.4 | 86.0 | 74.5 |
| PCB+RPP (Sun et al., 2018) | 93.8 | 81.6 | 83.3 | 69.2 |
| HPM (Fu et al., 2018) | 94.2 | 82.7 | 86.6 | 74.3 |
| MGN (Wang et al., 2018b) | 95.7 | 86.9 | 88.7 | 78.4 |
| VMRFANet (Ours) | 95.5 | 88.1 | 88.9 | 80.0 |

Table 3: Comparison of results on CUHK03-labeled (CUHK03-L) and CUHK03-detected (CUHK03-D) with the new protocol (Zhong et al., 2017a). The best results are in bold, while underlined numbers denote the second best.…”
Section: Market1501
Citation type: mentioning (confidence: 99%)
“…Visual attention has shown its success in re-ID tasks [42,22,29,19], as the mechanism conforms to the human visual system: a whole image is not processed in its entirety at once; rather, only the salient parts of the visual space are attended to when and where needed. A visual attention module can help extract dynamic features from salient parts, mostly human body parts, in an image by guiding the learning towards informative image regions [29]. Given the human body information, attention maps in which regions of interest are highlighted have much stronger responses on body regions than on background regions [22,42]. Inspired by this, the whole human body mask has been used to guide attention model training [29].…”
Section: Introduction
Citation type: mentioning (confidence: 99%)
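To illustrate the mask-guided attention idea in the last sentence of this statement, here is a minimal sketch in which a predicted attention map is trained to agree with a downsampled body mask; the BCE loss form, module structure, and weighting are assumptions, not the exact formulation of [29]:

```python
# Minimal sketch of mask-guided attention training: a predicted spatial
# attention map is pushed toward a (downsampled) human body mask, so the
# network learns to focus on body regions rather than background.
# The loss form and weighting are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MaskGuidedAttention(nn.Module):
    def __init__(self, in_channels):
        super().__init__()
        self.score = nn.Conv2d(in_channels, 1, kernel_size=1)

    def forward(self, fmap, body_mask=None):
        attn = torch.sigmoid(self.score(fmap))          # (B, 1, H, W) in [0, 1]
        out = fmap * attn                               # attended features
        loss = None
        if body_mask is not None:                       # (B, 1, H0, W0) in {0, 1}
            target = F.interpolate(body_mask.float(), size=attn.shape[-2:],
                                   mode="nearest")      # match feature-map size
            loss = F.binary_cross_entropy(attn, target) # attention ~ body mask
        return out, attn, loss
```

During training, the attention loss would be added to the usual ReID objectives, e.g. `total = id_loss + lambda_attn * attn_loss`, where `lambda_attn` is an assumed balancing weight.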