2020 American Control Conference (ACC)
DOI: 10.23919/acc45564.2020.9147901

Exchangeable Input Representations for Reinforcement Learning

Abstract: Poor sample efficiency is a major limitation of deep reinforcement learning in many domains. This work presents an attention-based method to project neural network inputs into an efficient representation space that is invariant under changes to input ordering. We show that our proposed representation results in an input space that is a factor of m! smaller for inputs of m objects. We also show that our method is able to represent inputs over variable numbers of objects. Our experiments demonstrate improvements…
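The abstract's invariance claim can be made concrete: an encoder f is exchangeable when f(x_{π(1)}, …, x_{π(m)}) = f(x_1, …, x_m) for every permutation π of the m objects, so all m! orderings of a set collapse to a single point in representation space, which is where the factor-of-m! reduction comes from. Below is a minimal sketch of one such encoder, using self-attention followed by mean pooling; the class name, layer sizes, and pooling choice are illustrative assumptions, not the architecture from the paper.

```python
import torch
import torch.nn as nn

class ExchangeableEncoder(nn.Module):
    """Hypothetical sketch of a permutation-invariant encoder:
    self-attention over a set of object features, then mean pooling.
    Layer sizes and structure are illustrative, not the paper's."""

    def __init__(self, obj_dim: int, embed_dim: int = 64, num_heads: int = 4):
        super().__init__()
        self.embed = nn.Linear(obj_dim, embed_dim)
        # Self-attention is permutation-equivariant: reordering the
        # m input objects reorders its outputs in the same way.
        self.attn = nn.MultiheadAttention(embed_dim, num_heads, batch_first=True)

    def forward(self, objects: torch.Tensor) -> torch.Tensor:
        # objects: (batch, m, obj_dim); m may differ between calls,
        # matching the abstract's variable-object-count claim.
        h = self.embed(objects)
        h, _ = self.attn(h, h, h)
        # Mean pooling turns equivariance into invariance: all m!
        # orderings of the same objects yield the same output.
        return h.mean(dim=1)

enc = ExchangeableEncoder(obj_dim=8).eval()
x = torch.randn(1, 5, 8)                # one set of 5 objects
shuffled = x[:, torch.randperm(5), :]   # same objects, new order
assert torch.allclose(enc(x), enc(shuffled), atol=1e-5)
```

Nothing in this module depends on the set size m, so the same weights process inputs with any number of objects, consistent with the abstract's variable-object-count claim.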

Cited by 2 publications (1 citation statement)
References 5 publications
“…3 using attention mechanisms [38] to accommodate the large input space without significant degradation of explanatory capacity or intractable growth in size. Attention mechanisms have been shown to improve learning efficiency on tasks with exchangeable inputs [39], [40].…”
Section: Neural Network
Confidence: 99%
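The citing passage's point about accommodating a large input space without "intractable growth in size" can also be checked concretely: an attention layer's parameter count depends only on the embedding dimension, never on the number of objects m. A small self-contained check, with arbitrary assumed dimensions:

```python
import torch
import torch.nn as nn

# An attention layer's weights are sized by embed_dim alone, so the
# model does not grow as the number of objects m grows.
attn = nn.MultiheadAttention(64, 4, batch_first=True)
n_params = sum(p.numel() for p in attn.parameters())
for m in (5, 50, 500):                      # vary the set size
    x = torch.randn(1, m, 64)
    out, _ = attn(x, x, x)
    assert out.shape == (1, m, 64)          # same weights, any m
print(f"attention parameters: {n_params}")  # fixed regardless of m
```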