2021
DOI: 10.48550/arxiv.2110.09712
Preprint

Balancing Value Underestimation and Overestimation with Realistic Actor-Critic

Abstract: Model-free deep reinforcement learning (RL) has been successfully applied to challenging continuous control domains. However, poor sample efficiency prevents these methods from being widely used in real-world domains. This paper introduces a novel model-free algorithm, Realistic Actor-Critic (RAC), which can be incorporated into any off-policy RL algorithm to improve sample efficiency. RAC employs Universal Value Function Approximators (UVFA) to simultaneously learn a policy family with the same neural network…
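
The core idea the abstract describes — one set of network weights representing a whole family of value functions, indexed by a conditioning input — can be illustrated with a minimal UVFA-style sketch. This is a hypothetical example, not the RAC reference implementation: the class name `UVFAQNetwork`, the scalar `beta`, and all dimensions are assumptions made here for illustration.

```python
import torch
import torch.nn as nn


class UVFAQNetwork(nn.Module):
    """Q-network conditioned on a scalar beta in [0, 1].

    A single set of weights represents a family of critics: different
    beta values can trade off pessimistic vs. optimistic value
    estimates, in the spirit of UVFA conditioning. Hypothetical sketch,
    not the authors' implementation.
    """

    def __init__(self, state_dim: int, action_dim: int, hidden: int = 256):
        super().__init__()
        # +1 input dimension for the conditioning scalar beta
        self.net = nn.Sequential(
            nn.Linear(state_dim + action_dim + 1, hidden),
            nn.ReLU(),
            nn.Linear(hidden, hidden),
            nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, state, action, beta):
        # beta has shape (batch, 1); concatenating it lets the same
        # weights produce a different Q-function for each beta value.
        return self.net(torch.cat([state, action, beta], dim=-1))


# Usage: evaluate the same transitions under two members of the family.
q = UVFAQNetwork(state_dim=17, action_dim=6)
s, a = torch.randn(4, 17), torch.randn(4, 6)
q_pessimistic = q(s, a, torch.zeros(4, 1))  # beta = 0
q_optimistic = q(s, a, torch.ones(4, 1))    # beta = 1
```

Conditioning on a scalar means every gradient update trains all members of the policy family at once, which is plausibly where the sample-efficiency benefit claimed in the abstract comes from.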

Cited by 0 publications
References 31 publications