2020
DOI: 10.1613/jair.1.11304

Using Task Descriptions in Lifelong Machine Learning for Improved Performance and Zero-Shot Transfer

Abstract: Knowledge transfer between tasks can improve the performance of learned models, but requires an accurate estimate of inter-task relationships to identify the relevant knowledge to transfer. These inter-task relationships are typically estimated based on training data for each task, which is inefficient in lifelong learning settings where the goal is to learn each consecutive task rapidly from as little data as possible. To reduce this burden, we develop a lifelong learning method based on coupled dictionary learning…
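
The zero-shot mechanism named in the abstract can be illustrated with a minimal sketch: two dictionaries, one over model parameters and one over task descriptors, share per-task sparse codes, so a new task's model can be predicted from its descriptor alone. All names, dimensions, and the Lasso-based sparse coding step below are illustrative assumptions, not the paper's implementation.

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
d_model, d_desc, k = 8, 5, 4          # model dim, descriptor dim, dictionary size

# Coupled dictionaries: L maps sparse codes to model parameters,
# D maps the same codes to task descriptors (random placeholders here).
L = rng.normal(size=(d_model, k))
D = rng.normal(size=(d_desc, k))

def zero_shot_model(phi, mu=0.1):
    """Predict model parameters for an unseen task from its descriptor phi."""
    # Sparse-code the descriptor: s = argmin_s ||phi - D s||^2 + mu ||s||_1
    coder = Lasso(alpha=mu, fit_intercept=False)
    coder.fit(D, phi)
    s = coder.coef_
    return L @ s                      # predicted parameters, no task data used

phi_new = rng.normal(size=d_desc)     # descriptor of a task never trained on
theta_hat = zero_shot_model(phi_new)
print(theta_hat.shape)                # (8,)
```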

Citations: cited by 23 publications (30 citation statements).
References: 21 publications (28 reference statements).

“…To our knowledge, the only work somewhat resembling ours is (Rostami et al., 2020), in which task descriptions were incorporated into lifelong learning for zero-shot transfer. We differ in three aspects: (i) they focused on robot control problems, (ii) their tasks are from a single domain, and (iii) in addition to the associated instruction, they assumed that each task has a large number of labeled examples.…”
Section: Related Work (mentioning)
confidence: 99%

“…PG-ELLA follows the dictionary-learning mechanics of ELLA, but replaces the supervised models that form ELLA's dictionary with policy factors (Bou Ammar et al., 2014). An extension of this approach supports cross-domain transfer by projecting the dictionary onto domain-specific policy spaces (Bou Ammar et al., 2015), and another extension leverages task descriptors to achieve zero-shot transfer to unseen tasks (Isele et al., 2016; Rostami et al., 2020). Zhao et al. (2017) followed a similar dictionary-learning formulation for deep networks, replacing all matrix operations with equivalent tensor operations.…”
Section: Lifelong Reinforcement Learning (mentioning)
confidence: 99%
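
To make the dictionary-learning mechanics in the excerpt above concrete, here is a rough sketch of an ELLA-style update loop: each observed task's parameters are sparse-coded against a shared dictionary, which is then refit by least squares. The single-dictionary form, the Lasso coder, and all dimensions are assumptions for illustration; PG-ELLA's actual policy-gradient machinery is not reproduced here.

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(1)
d, k = 8, 4                           # parameter dim, dictionary size
L = rng.normal(size=(d, k))           # shared knowledge dictionary
codes, params = [], []

def observe_task(theta_t, mu=0.1):
    """Fold a new task's (locally learned) parameters theta_t into L."""
    global L
    # Task-specific sparse code: s_t = argmin_s ||theta_t - L s||^2 + mu ||s||_1
    coder = Lasso(alpha=mu, fit_intercept=False)
    coder.fit(L, theta_t)
    codes.append(coder.coef_)
    params.append(theta_t)
    S = np.stack(codes, axis=1)       # k x T sparse codes seen so far
    Theta = np.stack(params, axis=1)  # d x T task parameters seen so far
    L = Theta @ np.linalg.pinv(S)     # refit the dictionary by least squares

for _ in range(5):                    # a stream of lifelong-learning tasks
    observe_task(rng.normal(size=d))
print(L.shape)                        # (8, 4)
```
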
“…The idea is to identify important weights that retain knowledge about a task and then consolidate them according to their relative importance for past tasks when learning future tasks. Continual learning of sequential tasks can be improved by using high-level task descriptors to compensate for data scarcity (Rostami et al., 2020a).…”
Section: Related Work (mentioning)
confidence: 99%
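
The weight-consolidation idea in the last excerpt (in the spirit of elastic weight consolidation) can be sketched as a quadratic penalty that anchors parameters that were important for past tasks. The importance vector F and all values below are illustrative assumptions, not any cited paper's code.

```python
import numpy as np

def consolidated_loss(theta, new_task_loss, theta_star, F, lam=1.0):
    """Loss on the new task plus a penalty that anchors important weights.

    theta_star: weights after the previous task; F: per-weight importance
    (both assumed given here, e.g. from a Fisher-information estimate).
    """
    penalty = 0.5 * lam * np.sum(F * (theta - theta_star) ** 2)
    return new_task_loss(theta) + penalty

theta_star = np.zeros(4)                # old-task optimum (toy values)
F = np.array([1.0, 0.1, 0.5, 0.0])      # importance per weight
loss = consolidated_loss(np.ones(4), lambda th: np.sum(th ** 2), theta_star, F)
print(loss)                             # new-task loss + consolidation penalty
```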