2019 IEEE/CVF International Conference on Computer Vision (ICCV) 2019
DOI: 10.1109/iccv.2019.00065

Co-Segmentation Inspired Attention Networks for Video-Based Person Re-Identification

Abstract: Video-based computer vision tasks can benefit from estimating salient regions and the interactions between those regions. Traditionally, this has been done by identifying object regions in the images using pre-trained models for object detection, object segmentation, and/or object pose estimation. Though using pre-trained models seems to be a viable approach, it is infeasible in practice due to the need for exhaustive annotation of object categories, the domain gap between datasets, and bias pre…

Cited by 106 publications (83 citation statements)
References 100 publications (143 reference statements)
“…We compare the proposed ASTA-Net against sixteen state-of-the-art methods on the MARS dataset in terms of CMC accuracy and mAP score, with the results shown in Table 1. The compared methods belong to two categories, i.e., image-based person re-identification methods including PartsNet [17], Re-ranking [51], MGCAM [38], Triplet [8], and video-based person re-identification methods including JST-RNN [53], QAN [29], DuATM [37], SpaAtn [20], RRU [30], M3D [19], COSAM [39], Snippet [1], STA [4], ADFA [49], GLTR [18], VRSTC [11]. As shown in Table 1, the proposed ASTA-Net achieves a 90.4% rank-1 recognition rate and an 84.1% mAP score.…”
Section: Comparison To State-of-the-arts
confidence: 99%
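The CMC rank-1 rate and mAP score quoted above are the standard video re-ID evaluation metrics. As a hedged illustration (not the exact MARS protocol, which additionally filters gallery entries by camera), a minimal sketch of computing rank-1 CMC and mAP from a query-gallery distance matrix might look like this; `cmc_map` and its arguments are hypothetical names for this sketch:

```python
import numpy as np

def cmc_map(dist, q_ids, g_ids):
    """Rank-1 CMC and mAP from a query-gallery distance matrix.

    dist: (num_query, num_gallery) distances; q_ids/g_ids: identity labels.
    A simplified sketch: real re-ID protocols (e.g. on MARS) also exclude
    same-camera gallery entries, which is omitted here.
    """
    rank1_hits, aps = [], []
    for i in range(len(q_ids)):
        order = np.argsort(dist[i])              # gallery sorted by distance
        matches = (g_ids[order] == q_ids[i])     # boolean relevance vector
        rank1_hits.append(matches[0])
        if matches.any():
            # Average precision: mean of precision at each true-match rank.
            cum_hits = np.cumsum(matches)
            precision = cum_hits / (np.arange(len(matches)) + 1)
            aps.append((precision * matches).sum() / matches.sum())
    return float(np.mean(rank1_hits)), float(np.mean(aps))
```

With this definition, a query whose correct identity is always ranked first yields rank-1 = 1.0 and mAP = 1.0, matching the intuition behind the percentages reported in Table 1 of the citing paper.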
“…It is a challenging problem, since there exist many variations such as viewpoint, background clutter, occlusion, and misalignment. There have been many works focusing on two branches of the problem: one is image-based person ReID [12][13][14][15][16][17][18][19][20][21], the other is video-based [8,10,[22][23][24][25][26][27][28][29][30][31][32][33][34].…”
Section: A Person Re-identification
confidence: 99%
“…They also provided a spatially and temporally efficient method to reduce FLOPs (floating-point operations) to save computational resources. Subramaniam et al. formulated a Co-segmentation based Attention Module (COSAM) [10] for video-based person ReID. The module helps extract a common set of salient feature maps across video frames through a Normalized Cross Correlation (NCC) layer and a summarization layer.…”
Section: A Person Re-identification
confidence: 99%
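The NCC-plus-summarization idea described above can be sketched in a few lines. This is a simplified NumPy illustration of the mechanism, not COSAM's actual implementation: features of one frame are channel-normalized, correlated with another frame's features at every spatial location, and the correlations are summarized into a spatial attention map. The function name and the sigmoid squashing are assumptions of this sketch.

```python
import numpy as np

def ncc_attention(feat_a, feat_b, eps=1e-6):
    """Spatial attention for frame A via normalized cross-correlation
    with frame B's features (a simplified sketch of the COSAM idea).

    feat_a, feat_b: arrays of shape (C, H, W) -- per-frame CNN features.
    Returns an (H, W) attention map with values in (0, 1).
    """
    C, H, W = feat_a.shape
    # Flatten spatial dims: each column is a C-dim descriptor of one location.
    a = feat_a.reshape(C, -1)                      # (C, H*W)
    b = feat_b.reshape(C, -1)                      # (C, H*W)
    # Zero-mean, unit-norm along channels (the "normalized" in NCC).
    a = (a - a.mean(axis=0)) / (a.std(axis=0) + eps)
    b = (b - b.mean(axis=0)) / (b.std(axis=0) + eps)
    a /= np.linalg.norm(a, axis=0) + eps
    b /= np.linalg.norm(b, axis=0) + eps
    # Correlate every location in A with every location in B.
    corr = a.T @ b                                 # (H*W, H*W)
    # Summarization: average each A-location's response over all of B,
    # then squash to (0, 1) to use as an attention mask.
    summary = corr.mean(axis=1)
    attn = 1.0 / (1.0 + np.exp(-summary))
    return attn.reshape(H, W)
```

Locations whose descriptors correlate strongly with the other frame (e.g. the person, present in both frames) receive higher attention than background clutter that varies across frames, which is the co-segmentation intuition.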