Proceedings of the Genetic and Evolutionary Computation Conference Companion 2019
DOI: 10.1145/3319619.3322044
Towards continual reinforcement learning through evolutionary meta-learning

Cited by 6 publications (6 citation statements). References 2 publications.
“…This approach was first introduced by Finn et al. (2017) and is called Model-Agnostic Meta-Learning (MAML). In the approach presented in this paper, we use an evolutionary meta-learning variant, in which evolution is trying to find good initial neural network parameters that allow an inner RL loop to adapt quickly (Fernando et al., 2018; Grbic and Risi, 2019).…”
Section: Meta-learning (mentioning), confidence: 99%
“…The question here is how to train an instinctual network that keeps the agent out of harm's way together with a policy network that should be able to adapt quickly to new goals. One of the main insights in the work presented here is that we can use an evolutionary meta-learning approach (Fernando et al., 2018; Grbic and Risi, 2019) to train a policy that can adapt quickly and safely to different tasks. The whole training procedure runs two training loops: an evolutionary outer loop, and a task-adaptation inner loop (Alg. 1 and Fig.…”
Section: Meta-training (mentioning), confidence: 99%
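The passage above describes a two-loop scheme: an evolutionary outer loop searches for initial policy parameters whose post-adaptation reward is high, while an inner loop performs a few task-specific RL updates. The following is a minimal, self-contained sketch of that structure, not the cited authors' implementation: the toy task distribution (sample_task), the quadratic reward, and the hill-climbing stand-in for the inner RL updates (inner_adapt) are illustrative assumptions; only the overall pattern (outer loop over initial parameters, fitness measured after inner-loop adaptation) follows the quoted description.

```python
# Sketch of evolutionary meta-learning: an evolutionary outer loop over
# initial parameters theta_0, with fitness = reward after a short inner
# adaptation loop on sampled tasks. All task/reward details are placeholders.
import numpy as np

rng = np.random.default_rng(0)
DIM = 8            # size of the toy policy parameter vector
POP = 20           # population size of the evolutionary outer loop
INNER_STEPS = 3    # number of task-adaptation steps in the inner loop
ALPHA = 0.1        # inner-loop perturbation scale
SIGMA = 0.05       # mutation strength of the outer loop

def sample_task():
    """Toy task: reach a random goal vector."""
    return rng.normal(size=DIM)

def reward(theta, goal):
    """Higher is better; maximal when theta matches the goal."""
    return -np.sum((theta - goal) ** 2)

def inner_adapt(theta0, goal):
    """Inner loop: a few hill-climbing steps standing in for RL updates."""
    theta = theta0.copy()
    for _ in range(INNER_STEPS):
        eps = rng.normal(scale=ALPHA, size=DIM)
        if reward(theta + eps, goal) > reward(theta, goal):
            theta = theta + eps
    return theta

def fitness(theta0):
    """Meta-objective: average reward *after* inner-loop adaptation."""
    scores = []
    for _ in range(4):
        goal = sample_task()
        scores.append(reward(inner_adapt(theta0, goal), goal))
    return np.mean(scores)

# Evolutionary outer loop: simple mutation-and-select search over theta_0.
theta0 = rng.normal(size=DIM)
for gen in range(50):
    candidates = [theta0 + SIGMA * rng.normal(size=DIM) for _ in range(POP)]
    scores = [fitness(c) for c in candidates]
    theta0 = candidates[int(np.argmax(scores))]

print("meta-learned initial parameters:", np.round(theta0, 2))
```

In this sketch the selection pressure acts only on how well a candidate initialization performs after adaptation, which is the defining property of the meta-learning variants cited above; swapping the hill-climbing inner loop for policy-gradient updates would recover the RL setting the quoted papers describe.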
“…One popular approach to address this problem is few-shot learning, in particular metalearning, either by utilizing gradients (Schmidhuber 1987; Thrun and Pratt 1998; Finn, Abbeel, and Levine 2017) or evolutionary procedures (Fernando et al. 2018; Grbic and Risi 2019). In metalearning, systems are trained by exposing them to a large number of tasks, and then tested for their ability to learn relevant but previously unseen tasks.…”
Section: Introduction (mentioning), confidence: 99%
“…One popular approach to address this problem is few-shot learning, in particular metalearning, either by utilizing gradients [7, 21, 25] or evolutionary procedures [6, 10]. In metalearning, systems are trained by exposing them to a large number of tasks, and then tested for their ability to learn new relevant but unseen tasks.…”
Section: Introduction (mentioning), confidence: 99%