2022
DOI: 10.36227/techrxiv.19679454
Preprint

Efficient Distributional Reinforcement Learning with Kullback-Leibler Divergence Regularization

Abstract: In this article, we address the issues of stability and data efficiency in reinforcement learning (RL). A novel RL approach, Kullback-Leibler divergence-regularized distributional RL (KLC51), is proposed to integrate the advantages of both the stability of distributional RL and the data efficiency of Kullback-Leibler (KL) divergence-regularized RL in one framework. KLC51 derives the Bellman equation and the TD errors regularized by KL divergence from a distributional perspective and explores the approximated str…
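Since the full text is truncated above, the sketch below is only a rough illustration of the ingredients the abstract names: a C51-style categorical value distribution with a distributional Bellman target, plus a KL-divergence term added to the TD loss. The projection step follows the published C51 algorithm; everything else (the function names, the fixed penalty weight `beta`, and the choice of reference distribution `ref_probs`) is an assumption made for illustration, not the paper's actual KLC51 formulation.

```python
import numpy as np

# Fixed categorical support ("atoms"), as in the C51 algorithm.
N_ATOMS, V_MIN, V_MAX = 51, -10.0, 10.0
ATOMS = np.linspace(V_MIN, V_MAX, N_ATOMS)
DELTA_Z = (V_MAX - V_MIN) / (N_ATOMS - 1)

def project_target(reward, gamma, next_probs):
    """Project the distributional Bellman target r + gamma * z onto the
    fixed support (the standard C51 projection step)."""
    target = np.zeros(N_ATOMS)
    tz = np.clip(reward + gamma * ATOMS, V_MIN, V_MAX)
    b = (tz - V_MIN) / DELTA_Z          # fractional atom index of each target
    lo = np.floor(b).astype(int)
    hi = np.ceil(b).astype(int)
    for j in range(N_ATOMS):
        if lo[j] == hi[j]:              # target lands exactly on an atom
            target[lo[j]] += next_probs[j]
        else:                           # split mass between neighbouring atoms
            target[lo[j]] += next_probs[j] * (hi[j] - b[j])
            target[hi[j]] += next_probs[j] * (b[j] - lo[j])
    return target

def kl_regularized_td_loss(pred_probs, target_probs, ref_probs,
                           beta=0.1, eps=1e-8):
    """Cross-entropy TD loss (as in C51) plus a KL penalty pulling the
    predicted distribution toward a reference distribution.  The fixed
    weight `beta` and the reference `ref_probs` are assumptions of this
    sketch, not taken from the paper."""
    cross_entropy = -np.sum(target_probs * np.log(pred_probs + eps))
    kl_penalty = np.sum(pred_probs * np.log((pred_probs + eps) /
                                            (ref_probs + eps)))
    return cross_entropy + beta * kl_penalty

# Toy usage with random distributions standing in for network outputs.
def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

rng = np.random.default_rng(0)
pred = softmax(rng.normal(size=N_ATOMS))    # current value distribution
next_p = softmax(rng.normal(size=N_ATOMS))  # next-state value distribution
ref = softmax(rng.normal(size=N_ATOMS))     # reference (e.g. target network)
target = project_target(reward=1.0, gamma=0.99, next_probs=next_p)
print(kl_regularized_td_loss(pred, target, ref))
```

In a real agent the three distributions would come from a value network, its target copy, and whatever reference the method regularizes toward; this toy example substitutes random softmax vectors so the snippet runs on its own.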


Cited by 0 publications
References 11 publications (16 reference statements)
