2022
DOI: 10.1609/aaai.v36i1.19949

Lifelong Person Re-identification by Pseudo Task Knowledge Preservation

Abstract: In the real world, training data for person re-identification (Re-ID) is collected discretely, with spatial and temporal variations, which requires a model to incrementally learn new knowledge without forgetting old knowledge. This problem is called lifelong person re-identification (LReID). Variations of illumination and background across the images of each task exhibit a task-specific image style and lead to a task-wise domain gap. In addition to the missing data from old tasks, the task-wise domain gap is a key factor in catastrophic…

Cited by 6 publications (5 citation statements); References 41 publications.

“…To overcome this problem, various methods have been proposed, which can be mainly categorized into two branches: data replay-based methods and knowledge distillation-based ones. The data replay-based approaches aim to prevent knowledge forgetting by storing and replaying exemplars from historical datasets (Wu and Gong 2021; Ge et al. 2022; Yu et al. 2023; Chen, Lagadec, and Bremond 2022; Huang et al. 2022). However, such a strategy tends to compromise data privacy and incur substantial computational overheads.…”
Section: Lifelong Person Re-identification
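In practice, the replay branch amounts to keeping a small per-identity exemplar buffer from past tasks and mixing stored samples into each new-task batch. Below is a minimal, illustrative PyTorch sketch of that idea; `ExemplarMemory`, `capacity_per_id`, and `replay_ratio` are hypothetical names chosen for exposition, not an implementation from any of the cited works.

```python
import random
import torch

class ExemplarMemory:
    """Hypothetical fixed-size buffer of (image, identity) exemplars
    retained from earlier LReID tasks (illustrative, not from a cited paper)."""

    def __init__(self, capacity_per_id: int = 2):
        self.capacity_per_id = capacity_per_id
        self.store = {}  # identity label -> list of image tensors

    def add(self, image: torch.Tensor, pid: int) -> None:
        samples = self.store.setdefault(pid, [])
        if len(samples) < self.capacity_per_id:
            samples.append(image)
        # A real system might use herding or reservoir sampling to pick replacements.

    def sample(self, n: int):
        pool = [(img, pid) for pid, imgs in self.store.items() for img in imgs]
        return random.sample(pool, min(n, len(pool)))

def make_replay_batch(new_images, new_pids, memory, replay_ratio=0.5):
    """Mix old-task exemplars into the current batch so the model
    rehearses previously seen identities while learning new ones."""
    replayed = memory.sample(int(len(new_images) * replay_ratio))
    if not replayed:
        return new_images, new_pids
    old_imgs = torch.stack([img for img, _ in replayed])
    old_pids = torch.tensor([pid for _, pid in replayed])
    return torch.cat([new_images, old_imgs]), torch.cat([new_pids, old_pids])
```

The per-identity cap is exactly what creates the costs the quoted passage objects to: raw images of previously seen people must be stored across tasks, which raises privacy concerns and grows the training workload.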
“…The knowledge distillation technique is widely used in LReID (Pu et al. 2021; Wu and Gong 2021; Ge et al. 2022; Sun and Mu 2022) by forcing the new model to produce outputs consistent with those of the old model. Pu et al. (2021) was one of the initial works to introduce knowledge distillation into the LReID task; it adopted logit distillation, which forces the new model to generate the same classification scores as the old model.…”
Section: Lifelong Person Re-identification
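Logit-level distillation of this kind is commonly implemented as a KL divergence between the softened classification scores of the frozen old model and the new model. The PyTorch sketch below is a generic illustration of that recipe, not the exact loss of Pu et al. (2021); the `temperature` value, the distillation weight, and the slicing of the new logits to the old class set are all assumptions.

```python
import torch
import torch.nn.functional as F

def logit_distillation_loss(new_logits: torch.Tensor,
                            old_logits: torch.Tensor,
                            temperature: float = 2.0) -> torch.Tensor:
    """KL divergence pushing the new model's softened class scores toward
    those of the frozen old model, on the classes the old model knows."""
    t = temperature
    old_probs = F.softmax(old_logits / t, dim=1)  # teacher targets
    new_log_probs = F.log_softmax(new_logits[:, :old_logits.size(1)] / t, dim=1)
    # The t*t factor rescales gradients to match the hard-label loss
    # (the standard choice from Hinton et al. 2015).
    return F.kl_div(new_log_probs, old_probs, reduction="batchmean") * (t * t)

# Sketch of a training step: only new_model is updated, old_model stays frozen.
# total_loss = id_loss(new_logits, labels) + kd_weight * logit_distillation_loss(
#     new_logits, old_logits)
```

Slicing the new logits to the old model's class count reflects that each new task adds identity classes for which the old classifier produces no scores.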