2022
DOI: 10.48550/arxiv.2205.11104
Preprint

Generalization, Mayhems and Limits in Recurrent Proximal Policy Optimization

Abstract: At first sight, it may seem straightforward to use recurrent layers in Deep Reinforcement Learning algorithms to enable agents to make use of memory in the setting of partially observable environments. Starting from the widely used Proximal Policy Optimization (PPO), we highlight vital details that one must get right when adding recurrence to achieve a correct and efficient implementation, namely: properly shaping the neural net's forward pass, arranging the training data, correspondingly selecting hidden states fo…
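
The implementation details named in the abstract can be made concrete. What follows is a minimal sketch, assuming a PyTorch setup, of two of them: arranging the training data into fixed-length, zero-padded sequences and masking the padded steps out of the PPO loss. The names used here (split_into_sequences, masked_ppo_loss, seq_len) are hypothetical illustrations, not identifiers from the paper or its reference implementation.

# A minimal sketch (not the authors' code) of arranging rollout data into
# fixed-length sequences and masking paddings in the clipped PPO loss.
import torch

def split_into_sequences(episode, seq_len):
    """Chop one episode's tensors into seq_len chunks, zero-padding the tail.

    episode: dict of tensors shaped (T, ...); returns a list of dicts shaped
    (seq_len, ...) plus a boolean 'mask' marking real (non-padded) steps.
    """
    T = episode["obs"].shape[0]
    chunks = []
    for start in range(0, T, seq_len):
        end = min(start + seq_len, T)
        chunk = {}
        for key, tensor in episode.items():
            piece = tensor[start:end]
            pad = seq_len - (end - start)
            if pad > 0:  # zero-pad the final, shorter chunk
                zeros = torch.zeros((pad,) + tuple(piece.shape[1:]),
                                    dtype=piece.dtype)
                piece = torch.cat([piece, zeros])
            chunk[key] = piece
        mask = torch.zeros(seq_len, dtype=torch.bool)
        mask[: end - start] = True  # True only for real steps
        chunk["mask"] = mask
        # The recurrent hidden state recorded at `start` during the rollout
        # would be stored here as this sequence's initial hidden state.
        chunks.append(chunk)
    return chunks

def masked_ppo_loss(ratio, advantage, mask, clip=0.2):
    """Clipped PPO surrogate, averaged over real (unmasked) steps only."""
    unclipped = ratio * advantage
    clipped = torch.clamp(ratio, 1.0 - clip, 1.0 + clip) * advantage
    loss = -torch.min(unclipped, clipped)
    return (loss * mask).sum() / mask.sum()  # paddings contribute nothing

Padding keeps every sequence the same length so the recurrent forward pass can be batched, while the boolean mask ensures the padded steps never influence the gradient.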

Cited by 0 publications
References 9 publications