2022
DOI: 10.1007/978-3-031-17587-9_7

SPeCiaL: Self-supervised Pretraining for Continual Learning

Cited by 21 publications (41 citation statements)
References 4 publications
“…(i) Regularization-based approaches modify the classification objective to preserve previously learned representations or encourage more meaningful representations e.g. DER [4], ACE [5], and CoPE [13]. (ii) Sampling-based techniques focus on the optimal selection and storing of the most representative replay memory during online training, e.g.…”
Section: Related Work
Citation type: mentioning, confidence: 99%
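The statement above groups replay-based continual learning methods into regularization-based approaches (e.g. DER, ACE, CoPE), which modify the classification objective to preserve past representations, and sampling-based approaches, which focus on what to store in the replay memory. As a rough illustration of the first family, the sketch below shows a DER-style objective that adds a logit-matching penalty on replayed samples to the usual cross-entropy loss. It assumes a generic PyTorch setup; `replay_buffer` and its `sample()` method are hypothetical placeholders, not an API from the cited papers.

```python
import torch
import torch.nn.functional as F

def der_style_loss(model, batch, replay_buffer, alpha=0.5):
    """Illustrative regularized-replay objective (DER-like sketch).

    Cross-entropy on the incoming batch, plus an MSE penalty that keeps
    the model's current logits on replayed samples close to the logits
    recorded when those samples were first stored.
    """
    x, y = batch
    loss = F.cross_entropy(model(x), y)

    if len(replay_buffer) > 0:
        # Assumed interface: sample() returns past inputs together with
        # the logits saved at storage time.
        x_old, logits_old = replay_buffer.sample()
        loss = loss + alpha * F.mse_loss(model(x_old), logits_old)
    return loss
```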
“…Second, these benchmarks are incapable of measuring whether models can rapidly adapt to new data under a fast-changing distribution shift, which is one of the main problems in classical online learning literature [41]. There have been efforts to address the second limitation by using new metrics that measure test accuracy more frequently during training [5,24]. However, these metrics capture the adaptation to held-out test data rather than to the incoming future data.…”
Section: Related Work
Citation type: mentioning, confidence: 99%
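The second statement contrasts metrics computed on held-out test data with adaptation to the incoming future data. A common way to capture the latter is a prequential (test-then-train) loop: each arriving batch is first used to measure accuracy, then used for a training step. The sketch below is a minimal illustration of that idea under an assumed PyTorch setup; it is not taken from the cited works, and `stream` is a hypothetical iterator over (inputs, labels) batches in arrival order.

```python
import torch

def online_eval_and_train(model, optimizer, stream, loss_fn):
    """Prequential-style loop: evaluate on each incoming batch before
    training on it, so accuracy reflects adaptation to future data."""
    next_batch_acc = []
    model.train()
    for x, y in stream:
        # Measure accuracy on the incoming batch before any update on it.
        with torch.no_grad():
            preds = model(x).argmax(dim=1)
            next_batch_acc.append((preds == y).float().mean().item())

        # Then take one online update step on the same batch.
        optimizer.zero_grad()
        loss_fn(model(x), y).backward()
        optimizer.step()
    return sum(next_batch_acc) / len(next_batch_acc)
```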