2021
DOI: 10.48550/arxiv.2110.15701
Preprint

Successor Feature Representations

Abstract: Transfer in Reinforcement Learning aims to improve learning performance on target tasks using knowledge from experienced source tasks. Successor features (SF) are a prominent transfer mechanism in domains where the reward function changes between tasks. They reevaluate the expected return of previously learned policies in a new target task to transfer their knowledge. A limiting factor of the SF framework is its assumption that rewards decompose linearly into successor features and a reward weight vector. …
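The linear-decomposition assumption the abstract refers to can be sketched in the standard successor-feature notation (following the common SF literature; the symbols below are illustrative and not taken from this paper). The reward is assumed to factor into features and a task-specific weight vector, so the Q-value of a policy under a new reward reduces to a dot product with its successor features:

    r(s, a, s') = \phi(s, a, s')^\top \mathbf{w}

    \psi^\pi(s, a) = \mathbb{E}^\pi\!\left[ \sum_{t=0}^{\infty} \gamma^t \, \phi(s_t, a_t, s_{t+1}) \;\middle|\; s_0 = s,\, a_0 = a \right]

    Q^\pi(s, a) = \psi^\pi(s, a)^\top \mathbf{w}

Transfer then only requires estimating the new task's weight vector \mathbf{w}, reusing the learned \psi^\pi; the paper's stated concern is that this linearity assumption limits the rewards the framework can represent.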

Cited by 1 publication (1 citation statement)
References 23 publications (30 reference statements)
“…Environment description. We evaluated the algorithms in an additional non-social task with complex, non-linear reward functions, called the racer environment [49]. The agent has to navigate in a continuous two dimensional scene for two hundred time steps (Fig.…”
Section: Racer Environment
confidence: 99%