2021
DOI: 10.3390/ai2040029

Refined Continuous Control of DDPG Actors via Parametrised Activation

Abstract: Continuous action spaces impose a serious challenge for reinforcement learning agents. While several off-policy reinforcement learning algorithms provide a universal solution to continuous control problems, the real challenge lies in the fact that different actuators feature different response functions due to wear and tear (in mechanical systems) and fatigue (in biomechanical systems). In this paper, we propose enhancing the actor-critic reinforcement learning agents by parameterising the final layer in the a…
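The abstract describes parameterising the actor's final layer so its output activation can adapt to each actuator's response. A minimal numpy sketch of that idea, assuming (hypothetically) a scaled-tanh parametrisation with a per-actuator amplitude and slope — the function name and parameters are illustrative, not the paper's actual formulation:

```python
import numpy as np

def parametrised_tanh(x, amplitude=1.0, slope=1.0):
    """Final-layer activation with per-actuator amplitude and slope.

    Hypothetical illustration: learnable amplitude/slope let the actor
    track actuators whose response range drifts with wear or fatigue.
    """
    return amplitude * np.tanh(slope * x)

# Pre-activation outputs of the actor's final layer for 3 actuators.
pre = np.array([0.5, -2.0, 3.0])

# Per-actuator parameters (in the paper's setting these would be
# learned jointly with the policy weights).
amp = np.array([1.0, 0.8, 2.0])   # output range of each actuator
slp = np.array([1.0, 1.5, 0.5])   # responsiveness of each actuator

action = parametrised_tanh(pre, amp, slp)
# Each action stays bounded in [-amp_i, amp_i].
assert np.all(np.abs(action) <= amp)
```

With a fixed `tanh` all actuators share one response curve; making amplitude and slope trainable gives each actuator its own curve at negligible extra parameter cost.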

Cited by 4 publications (1 citation statement)
References 29 publications
“…Although DQN effectively solved problems in high-dimensional state spaces, continuous action spaces remain difficult for it to handle [64]. DDPG was proposed to extend DRL algorithms to continuous action spaces [65]. As shown in Figure 12, the presented DDPG model includes two networks, an actor network and a critic network.…”
Section: A MDP Model
confidence: 99%
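The two-network structure the citing paper describes — an actor mapping states to continuous actions and a critic scoring state-action pairs — can be sketched as a forward pass. This is a minimal numpy illustration with hypothetical layer sizes, not the cited paper's architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

def mlp(sizes):
    """Random weight matrices for a small MLP (illustrative, untrained)."""
    return [rng.standard_normal((i, o)) * 0.1 for i, o in zip(sizes[:-1], sizes[1:])]

def forward(params, x):
    # tanh hidden layers, linear output layer
    for w in params[:-1]:
        x = np.tanh(x @ w)
    return x @ params[-1]

state_dim, action_dim = 4, 2

# Actor: state -> continuous action, bounded by a final tanh.
actor = mlp([state_dim, 32, action_dim])
# Critic: (state, action) -> scalar Q-value.
critic = mlp([state_dim + action_dim, 32, 1])

s = rng.standard_normal(state_dim)
a = np.tanh(forward(actor, s))               # action in (-1, 1)
q = forward(critic, np.concatenate([s, a]))  # Q(s, a)
assert a.shape == (action_dim,) and q.shape == (1,)
```

In DDPG the critic is trained on a temporal-difference target and the actor is updated by ascending the critic's gradient with respect to the action; the sketch above only shows the data flow between the two networks.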