2020
DOI: 10.48550/arxiv.2007.09357
Preprint

Temporal Complementary Learning for Video Person Re-Identification

Abstract: This paper proposes a Temporal Complementary Learning Network that extracts complementary features from consecutive video frames for video person re-identification. First, we introduce a Temporal Saliency Erasing (TSE) module comprising a saliency erasing operation and a series of ordered learners. Specifically, for a given frame of a video, the saliency erasing operation drives the corresponding learner to mine new and complementary parts by erasing the parts activated by previous frames, such that the diverse vi…
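The complementary mining idea in the abstract can be illustrated with a short sketch. This is not the authors' TSE module: the function names, the saliency estimate (per-frame channel-wise energy), and the erase ratio below are assumptions made for illustration only.

```python
import torch

def erase_salient_regions(feat_map, prev_mask, erase_ratio=0.3):
    """Zero out the spatial locations most activated by previous frames,
    pushing the learner for the current frame toward new body parts.

    feat_map:  (C, H, W) feature map of the current frame.
    prev_mask: (H, W) accumulated saliency of previously processed frames.
    """
    h, w = prev_mask.shape
    k = max(1, int(erase_ratio * h * w))
    # Indices of the k most salient locations according to previous frames.
    topk = torch.topk(prev_mask.flatten(), k).indices
    keep = torch.ones(h * w, device=feat_map.device)
    keep[topk] = 0.0
    return feat_map * keep.view(1, h, w)

def temporal_complementary_features(frame_feats):
    """frame_feats: ordered list of (C, H, W) per-frame feature maps.
    Returns erased maps whose salient regions avoid those of earlier
    frames, mimicking the complementary-part mining described above."""
    prev_mask = torch.zeros(frame_feats[0].shape[1:],
                            device=frame_feats[0].device)
    outputs = []
    for feat in frame_feats:
        outputs.append(erase_salient_regions(feat, prev_mask))
        # Accumulate this frame's saliency (channel-wise energy) for the next frame.
        prev_mask = torch.maximum(prev_mask, feat.abs().mean(dim=0))
    return outputs
```

In the paper the erasing is learned end-to-end with ordered learners per frame; the hand-crafted top-k masking here only stands in for that step.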

Cited by 3 publications (1 citation statement)
References 48 publications
“…Thus, in video-based person Re-ID, some existing works [22,39,11,40] concentrate on extracting attentive spatial features in the spatial view. Meanwhile, some methods [26,7,25,17] attempt to obtain temporal observations by temporal learning mechanisms. Besides, some approaches [23,21,34,13] utilize 3D-CNN to jointly explore spatial-temporal cues.…”
Section: Related Work, 2.1 Video-based Person Re-identification (mentioning)
confidence: 99%
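The "3D-CNN to jointly explore spatial-temporal cues" category mentioned in the statement above can be pictured with a minimal sketch; the layer sizes and clip shape are illustrative assumptions, not taken from any of the cited methods.

```python
import torch
import torch.nn as nn

# A single 3D convolution processes a whole clip (batch, channels, frames, H, W),
# so spatial and temporal patterns are learned in one joint operation.
conv3d = nn.Conv3d(in_channels=3, out_channels=64,
                   kernel_size=(3, 3, 3), padding=1)

clip = torch.randn(1, 3, 8, 128, 64)   # one 8-frame RGB pedestrian clip
features = conv3d(clip)                # -> (1, 64, 8, 128, 64)
```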