2018
DOI: 10.48550/arxiv.1803.08089
Incremental Learning-to-Learn with Statistical Guarantees

Abstract: In learning-to-learn the goal is to infer a learning algorithm that works well on a class of tasks sampled from an unknown meta distribution. In contrast to previous work on batch learning-to-learn, we consider a scenario where tasks are presented sequentially and the algorithm needs to adapt incrementally to improve its performance on future tasks. Key to this setting is for the algorithm to rapidly incorporate new observations into the model as they arrive, without keeping them in memory. We focus on the case…
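The setting the abstract describes can be illustrated with a minimal, hypothetical sketch: each task is solved by ridge regression biased toward a shared vector h, and h is updated as a running average of per-task solutions, so each task's data is processed once and then discarded. This is an illustrative simplification, not the paper's exact algorithm; the function names, the closed-form task solver, and the averaging rule are all assumptions made for the example.

```python
import numpy as np

def solve_task(X, y, h, lam):
    """Biased ridge regression for one task:
    w = argmin_w ||Xw - y||^2 + lam * ||w - h||^2,
    with closed form w = (X^T X + lam*I)^{-1} (X^T y + lam*h)."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y + lam * h)

def incremental_meta_learn(task_stream, d, lam=1.0):
    """Process tasks sequentially, updating the shared bias h as a
    running average of per-task solutions. No task data is retained."""
    h = np.zeros(d)
    for t, (X, y) in enumerate(task_stream, start=1):
        w = solve_task(X, y, h, lam)  # adapt to the current task
        h += (w - h) / t              # incremental mean update of the bias
    return h
```

When tasks share a common underlying parameter, h drifts toward it, so later tasks start from a better bias than earlier ones, which is the "improve on future tasks" behavior the abstract refers to.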

Cited by 13 publications (14 citation statements)
References 14 publications (49 reference statements)
“…[14,19,24,26,10]. We also note that there are analyses for other representation learning schemes [5,31,20,2,15], which are beyond the scope of this paper.…”
Section: Related Work (mentioning)
confidence: 99%
“…Inspired by MAML, a line of gradient-based meta-learning algorithms has been widely used in practice (Nichol et al., 2018; Al-Shedivat et al., 2017; Jerfel et al., 2018). Much follow-up work focused on the online setting with regret bounds (Denevi et al., 2018; Finn et al., 2019; Khodak et al., 2019; Balcan et al., 2015; Alquier et al., 2017; Bullins et al., 2019; Pentina & Lampert, 2014).…”
Section: Related Work (mentioning)
confidence: 99%
“…We show that fine-tuning quickly adapts to new tasks, requiring fewer samples in certain cases than methods using "frozen representation" objectives (as studied in Du et al. (2020) and formalized in Section 2.2). To the best of our knowledge, no prior studies exist beyond fine-tuning a linear model (Denevi et al., 2018; Konobeev et al., 2020; Collins et al., 2020a; Lee et al., 2020) or only the task-specific layers (Du et al., 2020; Tripuraneni et al., 2020b,a; Mu et al., 2020). In particular, our work can be viewed as a continuation of the work presented in Tripuraneni et al. (2020b), where the authors acknowledged that their framework does not incorporate representation fine-tuning, leaving it as a promising line of future work.…”
Section: Introduction (mentioning)
confidence: 99%
“…This thesis focuses on the first MAML algorithms, but the techniques here can be extended to analyze the Hessian-free multi-step MAML. As an alternative to meta-initialization algorithms such as MAML, meta-regularization approaches aim to learn a good bias for a regularized empirical risk minimization problem for intra-task learning [2,22,21,20,104,8,132]. [8] formalized a connection between meta-initialization and meta-regularization from an online learning perspective.…”
Section: Related Work (mentioning)
confidence: 99%