2023 IEEE/CVF Winter Conference on Applications of Computer Vision (WACV)
DOI: 10.1109/wacv56688.2023.00524
Saliency Guided Experience Packing for Replay in Continual Learning

Cited by 8 publications (4 citation statements) | References 30 publications

“…In this way, we could choose the smallest number of samples that contain the most influential information about each task. Several works in the literature propose methods for mitigating catastrophic forgetting in artificial neural networks that exploit explainable AI [32,33,34], and they manage to achieve comparable and, in some cases, higher performances than other state-of-the-art approaches.…”
Section: Discussion
Confidence: 99%
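
The selection idea quoted above (keeping only the few most influential samples per task, scored with explainable-AI tools) can be illustrated with a minimal PyTorch sketch. The scoring rule below (magnitude of the input gradient of the true-class logit) and the names select_salient_samples and k are illustrative assumptions, not the exact procedure of the cited papers:

import torch

def select_salient_samples(model, inputs, labels, k):
    """Rank samples by input-gradient saliency and keep the top-k.

    Score = L1 norm of d(logit of true class)/d(input); a higher score
    suggests the model's decision depends more strongly on that sample.
    """
    model.eval()
    x = inputs.clone().requires_grad_(True)
    logits = model(x)
    true_logits = logits.gather(1, labels.unsqueeze(1)).sum()
    grads, = torch.autograd.grad(true_logits, x)
    scores = grads.abs().flatten(1).sum(dim=1)  # per-sample saliency score
    top = scores.topk(k).indices                # k most influential samples
    return inputs[top].detach(), labels[top]

The selected pairs would then be stored in the episodic memory in place of randomly chosen samples.
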
“…These methods retrieve samples from the memory buffer and combine them with the incoming data for model updates. Various strategies exist in this category, including experience replay (ER) [22], maximally interfered retrieval (MIR) [28], adversarial Shapley value experience replay (ASER) [23], gradient coreset replay (GCR) [24], and experience packing and replay (EPR) [25]. However, these methods primarily focus on data storage and may not fully consider the impact of new data, potentially leading to a decrease in classification performance.…”
Section: Methods Sampling Strategy
Confidence: 99%
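
As a rough illustration of the ER family described above, here is a minimal sketch, assuming a PyTorch model and optimizer. The ReservoirBuffer class and er_step helper are hypothetical names, and reservoir sampling is just one common buffer policy; the cited MIR/ASER/GCR/EPR methods each use their own retrieval or packing strategy:

import random
import torch
import torch.nn.functional as F

class ReservoirBuffer:
    """Fixed-size memory; reservoir sampling keeps a uniform sample of the stream."""
    def __init__(self, capacity):
        self.capacity, self.seen, self.data = capacity, 0, []

    def add(self, x, y):
        self.seen += 1
        if len(self.data) < self.capacity:
            self.data.append((x, y))
        else:
            j = random.randrange(self.seen)
            if j < self.capacity:   # replace with decreasing probability
                self.data[j] = (x, y)

    def sample(self, n):
        batch = random.sample(self.data, min(n, len(self.data)))
        xs, ys = zip(*batch)
        return torch.stack(xs), torch.stack(ys)

def er_step(model, optimizer, x_new, y_new, buffer, replay_size=32):
    """One ER update: loss on the incoming batch plus loss on replayed memories."""
    optimizer.zero_grad()
    loss = F.cross_entropy(model(x_new), y_new)
    if buffer.data:
        x_mem, y_mem = buffer.sample(replay_size)
        loss = loss + F.cross_entropy(model(x_mem), y_mem)
    loss.backward()
    optimizer.step()
    for x, y in zip(x_new, y_new):  # update memory after the step
        buffer.add(x.detach(), y.detach())
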
“…There are three main continual learning approaches: regularization [14]- [16], parameter isolation [17]- [19], and replay-based learning methods [22]- [25]. Among them, replay-based learning methods are simple and effective and have been widely used in online class-incremental learning studies.…”
Section: Introduction
Confidence: 99%
“…One line of works (Kirkpatrick et al. 2017; Schwarz et al. 2018; Ebrahimi et al. 2020; Saha et al. 2021a; Kao et al. 2021) achieves this goal by penalizing or preventing changes to the weights of the model that are most important for the past tasks while learning new tasks. Other works minimize forgetting either by storing samples from old tasks in memory (Robins 1995; Lopez-Paz and Ranzato 2017; Chaudhry et al. 2019b, 2021; Saha and Roy 2023) or by synthesizing old data with generative models (Shin et al. 2017) for rehearsal. Despite varying degrees of success, the stability-plasticity balance in such methods breaks down under long sequences of learning.…”
Section: Introduction
Confidence: 99%
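
The first line of work mentioned in the excerpt (penalizing changes to weights important for past tasks, as in Kirkpatrick et al. 2017) typically adds a quadratic penalty to the new task's loss. A minimal sketch, assuming precomputed importance weights fisher and stored old parameters old_params, both dicts keyed by parameter name and both illustrative names:

import torch

def ewc_penalty(model, fisher, old_params, lam=1.0):
    """Quadratic penalty discouraging changes to weights important for past tasks.

    fisher[name] approximates each parameter's importance (e.g. squared
    gradients of the old-task log-likelihood); old_params[name] holds the
    weights saved at the end of the previous task.
    """
    penalty = 0.0
    for name, p in model.named_parameters():
        penalty = penalty + (fisher[name] * (p - old_params[name]).pow(2)).sum()
    return 0.5 * lam * penalty

In use, the term is simply added to the current task's objective, e.g. loss = task_loss + ewc_penalty(model, fisher, old_params), so that gradient descent trades off new-task fit against drift on important old weights.
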