2016
DOI: 10.48550/arxiv.1606.04671
Preprint

Progressive Neural Networks

Cited by 399 publications (632 citation statements)
References 0 publications
“…The sample is first assigned to the proper task at inference time, and the corresponding model version is used. In (PNN) [Rusu et al., 2016], (DEN) [Yoon et al., 2018], and (RCL) [Xu and Zhu, 2018] new structural elements are added to the model for each new task, while in [Masse et al., 2018; Golkar et al., 2019; Wortsman et al., 2020] a large model is considered from which submodels are selected for subsequent tasks. Methods in this category exhibit high accuracy in a task-incremental scenario when test samples are given with a corresponding task index [van de Ven and Tolias, 2019].…”
Section: Related Work
confidence: 99%
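
The mechanism described in the statement above, adding new structural elements per task as in PNN, can be sketched in a few lines. The following is a minimal, illustrative PyTorch-style implementation assuming two-layer MLP columns; the class and method names are hypothetical, and this is only a sketch of the column-per-task idea with frozen earlier columns and trainable lateral adapters, not the authors' code.

import torch
import torch.nn as nn

class ProgressiveNet(nn.Module):
    """Illustrative progressive network: one small MLP "column" per task.

    Earlier columns are frozen when a new task arrives; the new column
    receives lateral inputs from the hidden activations of every
    previous column through small trainable adapter layers.
    """

    def __init__(self, in_dim, hidden_dim, out_dim):
        super().__init__()
        self.in_dim, self.hidden_dim, self.out_dim = in_dim, hidden_dim, out_dim
        self.columns = nn.ModuleList()   # one column per task
        self.laterals = nn.ModuleList()  # per column: adapters from earlier columns

    def add_column(self):
        """Freeze all existing columns and append a new trainable one."""
        for col in self.columns:
            for p in col.parameters():
                p.requires_grad_(False)
        column = nn.ModuleDict({
            "hidden": nn.Linear(self.in_dim, self.hidden_dim),
            "out": nn.Linear(self.hidden_dim, self.out_dim),
        })
        # One lateral adapter per previously trained column.
        adapters = nn.ModuleList(
            nn.Linear(self.hidden_dim, self.hidden_dim) for _ in self.columns
        )
        self.columns.append(column)
        self.laterals.append(adapters)

    def forward(self, x, task_id):
        hiddens = []
        for t in range(task_id + 1):
            h = torch.relu(self.columns[t]["hidden"](x))
            # Add lateral contributions from the hidden states of earlier columns.
            for prev_h, adapter in zip(hiddens, self.laterals[t]):
                h = h + torch.relu(adapter(prev_h))
            hiddens.append(h)
        return self.columns[task_id]["out"](hiddens[task_id])

Training would call add_column() before each task and optimize only parameters with requires_grad=True, so earlier tasks' weights stay intact while their features remain available through the adapters.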
“…Continual learning (CL) is a machine learning domain that aims to mitigate catastrophic forgetting and enable models to be trained with an incoming stream of training data. This is usually achieved through regularization [Kirkpatrick et al., 2017], adaptation of the model's architecture [Rusu et al., 2016], or replay of previous data examples. Typically, methods based on a replay buffer achieve the best performance due to the high…”
Section: Introduction
confidence: 99%
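
For contrast with the architectural route sketched above, the replay approach that this statement mentions can be illustrated as a fixed-size buffer maintained with reservoir sampling; the class below is a hypothetical sketch, not code from any of the cited papers.

import random

class ReservoirReplayBuffer:
    """Fixed-size buffer holding a uniform random sample of all examples seen so far."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.buffer = []
        self.seen = 0  # number of examples offered to the buffer

    def add(self, example):
        self.seen += 1
        if len(self.buffer) < self.capacity:
            self.buffer.append(example)
        else:
            # Standard reservoir sampling: keep the new example with
            # probability capacity / seen, evicting a random old one.
            slot = random.randrange(self.seen)
            if slot < self.capacity:
                self.buffer[slot] = example

    def sample(self, batch_size):
        return random.sample(self.buffer, min(batch_size, len(self.buffer)))

During training on a new task, each minibatch of fresh data would be mixed with a sample() from this buffer so the loss still covers examples from earlier tasks.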
“…This property is best showcased in the work (Eysenbach et al., 2018), where they learn diverse skills without any reward function. Furthermore, sequential learning and the need to retain previously known skills have always been a focus (Rusu et al., 2016; Kirkpatrick et al., 2017). In the space of multi-task reinforcement learning with neural networks, Teh et al. (2017) proposed a framework that allows sharing of knowledge across tasks via a task-agnostic prior.…”
Section: Multi-task Reinforcement Learning
confidence: 99%
“…According to the mechanism of memory consolidation, current approaches are categorized into three types: (i) Experiential rehearsal-based approaches, which focus on replaying episodic memory (Robins 1995), and the core of which is to select representative samples or features from historical data (Rebuffi et al. 2017; Aljundi et al. 2019; Bang et al. 2021). (ii) Distributed memory representation approaches (Fernando et al. 2017; Mallya and Lazebnik 2018), which allocate individual networks for specific knowledge to avoid interference, represented by Progressive Neural Networks (PNN) (Rusu et al. 2016).…”
Section: Related Work
confidence: 99%
“…(Right): Similarly, the experiment was performed on CIFAR-10 for the first two tasks. The feature transfer, i.e., Progressive Neural Networks (Rusu et al. 2016), is a feature transfer-based method. Compared with w/o transfer, it transfers features from the model of previous tasks to the current task's learning.…”
Section: Introduction
confidence: 99%
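
The "feature transfer" setup described in the last statement, reusing features from the previous tasks' model while learning the current task, might look like the following minimal sketch; the module and argument names are hypothetical, and concatenating frozen and trainable features is only one simple way such a transfer could be realized.

import torch
import torch.nn as nn

class FeatureTransferModel(nn.Module):
    """Illustrative feature transfer: a frozen extractor from the previous task
    supplies features that are concatenated with those of a new trainable
    extractor before the current task's classifier."""

    def __init__(self, prev_extractor, new_extractor, feat_dim, num_classes):
        super().__init__()
        self.prev = prev_extractor
        for p in self.prev.parameters():
            p.requires_grad_(False)  # previous-task features are reused, not retrained
        self.new = new_extractor
        self.classifier = nn.Linear(2 * feat_dim, num_classes)

    def forward(self, x):
        with torch.no_grad():
            f_prev = self.prev(x)   # transferred features from the previous task
        f_new = self.new(x)         # features learned for the current task
        return self.classifier(torch.cat([f_prev, f_new], dim=1))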