2022
DOI: 10.1016/j.engappai.2022.104966
Online Continual Learning via the Meta-learning update with Multi-scale Knowledge Distillation and Data Augmentation

Cited by 3 publications (1 citation statement)
References 55 publications (72 reference statements)
“…last fully-connected layer). In order to mitigate this drawback, a scheme of multi-scale knowledge distillation is adopted inspired by the work of [14]. More specifically, apart from the model's output each client transmits some intermediate outputs (i.e.…”
Section: Local Supervision Representation Learning (mentioning, confidence: 99%)
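The citing excerpt describes multi-scale knowledge distillation: intermediate feature maps are distilled alongside the model's final output rather than relying on the last fully-connected layer alone. Below is a minimal sketch of that general idea, assuming a PyTorch-style student/teacher setup; the function name, loss weights, and choice of matched layers are illustrative assumptions, not the exact formulation used in the cited paper.

```python
# Hypothetical sketch of a multi-scale knowledge distillation loss (PyTorch).
# Intermediate feature maps from matching layers of a teacher and a student
# are compared in addition to the usual output-level (logit) distillation.
import torch
import torch.nn.functional as F

def multiscale_kd_loss(student_feats, teacher_feats,
                       student_logits, teacher_logits,
                       temperature=2.0, feat_weight=0.5):
    """student_feats / teacher_feats: lists of same-shaped feature maps taken
    at matching intermediate layers (e.g. the ends of network stages)."""
    # Output-level distillation: soften both logit distributions with a
    # temperature and match the student's distribution to the teacher's.
    soft_targets = F.softmax(teacher_logits / temperature, dim=1)
    log_student = F.log_softmax(student_logits / temperature, dim=1)
    logit_loss = F.kl_div(log_student, soft_targets,
                          reduction="batchmean") * temperature ** 2

    # Multi-scale term: L2 distance between normalized intermediate
    # representations at each selected scale.
    feat_loss = 0.0
    for fs, ft in zip(student_feats, teacher_feats):
        feat_loss = feat_loss + F.mse_loss(
            F.normalize(fs.flatten(1), dim=1),
            F.normalize(ft.flatten(1), dim=1))

    return logit_loss + feat_weight * feat_loss
```

In the federated setting the excerpt refers to, each client would transmit these intermediate outputs together with the final output so the distillation signal covers multiple scales; the relative weighting of the two terms is a tunable design choice.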