2022
DOI: 10.48550/arxiv.2205.03307
Preprint

Forget Less, Count Better: A Domain-Incremental Self-Distillation Learning Benchmark for Lifelong Crowd Counting

Abstract: Crowd counting has important applications in public safety and pandemic control. A robust and practical crowd counting system has to be capable of continuously learning from newly arriving domain data in real-world scenarios instead of fitting a single domain only. Off-the-shelf methods have several drawbacks when handling multiple domains: 1) the models achieve limited performance (or even degrade dramatically) on old domains after training on images from new domains, due to discrepancies in the intrinsic data distributions…

Cited by 1 publication (2 citation statements)
References 42 publications (77 reference statements)
“…A relevant example of domain-incremental learning in the real world is an agent that needs to learn to survive in different environments. Classic scenes include autonomous driving [23,32], person ReID [33], and crowd counting [34], among others [35]. For instance, Garg et al [23] proposed a dynamic semantic segmentation model, which is effective in three driving scenes from visually disparate geographical regions.…”
Section: Domain-incremental Learning
confidence: 99%
“…In accordance with the evaluation metrics used in studies such as [19,23,34,43], we utilize ∆m and BWT (backward transfer) to evaluate the performance of our incremental learning model, DILRS. Specifically, ∆m measures the average performance degradation compared to the single-task baseline b:…”
Section: Implementation Details
confidence: 99%
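The quoted passage is cut off before the formula for ∆m. As a rough guide, here is a minimal sketch of the two metrics under their standard continual-learning definitions: ∆m as the mean relative degradation versus per-domain single-task baselines, and BWT as the mean change on earlier tasks after all training ends. The paper's exact formulation may differ, and for error metrics such as MAE (lower is better) the sign conventions are reversed.

```python
# Hedged sketch of the Delta_m and BWT metrics named in the quote above,
# using standard continual-learning definitions (assumed, not taken from
# the paper). R[t][i] is the score on domain i after training on domain t
# (higher is better here); b[i] is the single-task baseline on domain i.

def delta_m(R, b):
    """Average relative performance degradation vs. single-task baselines."""
    final = R[-1]  # scores on every domain after the last training stage
    return sum((bi - fi) / bi for bi, fi in zip(b, final)) / len(b)

def bwt(R):
    """Backward transfer: final score on each earlier task minus the score
    measured right after that task was learned (negative = forgetting)."""
    T = len(R)
    return sum(R[-1][i] - R[i][i] for i in range(T - 1)) / (T - 1)
```

Usage on a toy 3-domain run: with `R = [[0.9, 0.0, 0.0], [0.8, 0.85, 0.0], [0.7, 0.8, 0.9]]` and baselines `b = [0.9, 0.85, 0.9]`, `bwt(R)` is negative, indicating forgetting on the first two domains.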