2022
DOI: 10.48550/arxiv.2203.04297
Preprint

Rényi State Entropy for Exploration Acceleration in Reinforcement Learning

Abstract: One of the most critical challenges in deep reinforcement learning is maintaining the long-term exploration capability of the agent. To tackle this problem, it has recently been proposed to provide intrinsic rewards that encourage the agent to explore. However, most existing intrinsic reward-based methods in the literature fail to provide sustainable exploration incentives, a problem known as vanishing rewards. In addition, these conventional methods incur complex models and additional memory in the…
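For context on the title, the Rényi entropy of order α is a standard generalization of Shannon entropy (this definition is background, not taken from the truncated abstract); in LaTeX:

    H_\alpha(X) = \frac{1}{1-\alpha} \log \sum_{x} p(x)^{\alpha}, \qquad \alpha > 0, \ \alpha \neq 1,

with \lim_{\alpha \to 1} H_\alpha(X) = -\sum_x p(x) \log p(x), the Shannon entropy. Maximizing an entropy of this kind over state visitations rewards the agent for spreading probability mass across many states, which is the exploration mechanism the title and abstract allude to.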

Cited by 1 publication (1 citation statement)
References 21 publications (34 reference statements)
“…In the study presented in [21], the focus was on maximizing the Rényi entropy over the state-action space to enhance exploration, particularly in a reward-free reinforcement learning setting. This approach was further advanced and extended to a reward-based reinforcement learning framework by [22], who introduced Rényi entropy maximization within the state space to augment exploration.…”
Section: Introduction
confidence: 99%
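To make the cited idea concrete, below is a minimal sketch of a state-entropy intrinsic bonus computed from k-nearest-neighbor distances among visited states. The function name, the order parameter alpha, and the exponent d*(1 - alpha) (loosely mirroring k-NN Rényi-entropy estimators) are illustrative assumptions, not the exact estimator of [21] or [22].

    import numpy as np

    def knn_state_entropy_bonus(states, k=3, alpha=0.5, eps=1e-8):
        """Illustrative k-NN intrinsic bonus.

        states: (N, d) array of visited states. Returns one bonus per state.
        A larger distance to the k-th nearest neighbor means the state lies
        in a sparsely visited region, so it receives a larger bonus. The
        power d*(1 - alpha) is an assumption loosely modeled on k-NN
        Renyi-entropy estimators, not the paper's estimator.
        """
        n, d = states.shape
        # Pairwise Euclidean distances (O(N^2 d); acceptable for a sketch).
        diffs = states[:, None, :] - states[None, :, :]
        dists = np.sqrt((diffs ** 2).sum(axis=-1))
        # Sort each row ascending; index 0 is the point itself (distance 0),
        # so index k is the distance to the k-th nearest neighbor.
        knn_dist = np.sort(dists, axis=1)[:, k]
        # With alpha in (0, 1), the exponent is positive: farther neighbors
        # yield larger bonuses, sustaining exploration of novel states.
        return (knn_dist + eps) ** (d * (1.0 - alpha))

As a usage example, knn_state_entropy_bonus(np.random.rand(1024, 8)) returns a bonus per state, with the largest values assigned to states in sparsely visited regions.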