2020
DOI: 10.1007/s10514-020-09922-z

Exploration of the applicability of probabilistic inference for learning control in underactuated autonomous underwater vehicles

Abstract: Underwater vehicles are employed in the exploration of dynamic environments, where tuning a specific controller for each task would be time-consuming and unreliable, since the controller depends on mathematical coefficients calculated under idealised conditions. In such cases, learning the task from experience can be a useful alternative. This paper explores the capability of probabilistic inference learning to control autonomous underwater vehicles that can be used for different tasks without re-programming the cont…
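The probabilistic inference approach the abstract refers to (PILCO-style learning control) fits a Gaussian-process model of the vehicle dynamics from a small number of trials and then improves the controller on that learned model rather than on the vehicle itself. The sketch below only illustrates that loop under assumed toy dynamics: the surge-like system, the linear policy, the quadratic cost, and the random-search policy update are assumptions made for this example; the real method propagates state distributions analytically and optimises the policy by gradient descent on the expected long-term cost.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(0)
state_dim, action_dim, horizon = 3, 1, 25           # assumed sizes for a toy surge model

def rollout(policy_params, dynamics=None):
    """Run one episode on the toy system, or on the learned GP model if given."""
    x = np.array([1.0, 0.0, 0.5])                   # assumed initial state
    data, cost = [], 0.0
    for _ in range(horizon):
        u = np.tanh(policy_params @ x)              # linear policy squashed to [-1, 1]
        if dynamics is None:                        # "real" system: toy damped dynamics
            x_next = x + 0.1 * np.array([x[1], -0.5 * x[1] + u[0], u[0] - x[2]])
        else:                                       # learned model: each GP predicts one state delta
            xu = np.r_[x, u].reshape(1, -1)
            x_next = x + np.array([gp.predict(xu)[0] for gp in dynamics])
        data.append((np.r_[x, u], x_next - x))
        cost += float(x_next @ x_next)              # quadratic cost on the state
        x = x_next
    return data, cost

policy = rng.normal(scale=0.1, size=(action_dim, state_dim))
dataset = []
for it in range(5):                                 # PILCO-style outer loop
    new_data, real_cost = rollout(policy)           # 1) one trial on the real system
    dataset += new_data
    X = np.array([d[0] for d in dataset])
    Y = np.array([d[1] for d in dataset])
    gps = []                                        # 2) fit one GP per state dimension
    for d in range(state_dim):
        gp = GaussianProcessRegressor(kernel=RBF() + WhiteKernel(), normalize_y=True)
        gp.fit(X, Y[:, d])
        gps.append(gp)
    best, best_cost = policy, rollout(policy, dynamics=gps)[1]
    for _ in range(50):                             # 3) improve the policy on the model
        cand = best + rng.normal(scale=0.05, size=best.shape)
        cand_cost = rollout(cand, dynamics=gps)[1]
        if cand_cost < best_cost:
            best, best_cost = cand, cand_cost
    policy = best
    print(f"trial {it}: cost on the real system {real_cost:.2f}")
```

Because each real trial supplies only a handful of transitions and all policy search happens on the GP model, the number of interactions with the real system stays small, which is the data-efficiency property the abstract highlights.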


Cited by 10 publications (2 citation statements)
References: 33 publications
“…Jiang et al [21] also used the DDPG algorithm to control three degrees of freedom of an underwater vehicle and realized uniform linear motion of the vehicle. Other scholars have realized control of a 5-DOF AUV by improving deep RL [22]. Some researchers used the DQN and PPO algorithms to realize collision avoidance and multi-position tracking of an AUV [23].…”
Section: The Related Work of Control Based on Reinforcement Learning
confidence: 99%
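For context on the deep-RL baselines named in the statement above, the following is a minimal sketch of the DDPG update (a deterministic actor-critic with target networks and soft updates). The network sizes, hyperparameters, and the random batch standing in for replay-buffer samples are illustrative assumptions, not values from [21] or the other cited works.

```python
import torch
import torch.nn as nn

state_dim, action_dim = 3, 1                     # assumed toy sizes
gamma, tau = 0.99, 0.005                         # discount and soft-update rate

def mlp(in_dim, out_dim, out_act=None):
    layers = [nn.Linear(in_dim, 64), nn.ReLU(), nn.Linear(64, out_dim)]
    if out_act is not None:
        layers.append(out_act)
    return nn.Sequential(*layers)

actor = mlp(state_dim, action_dim, nn.Tanh())    # deterministic policy mu(s) in [-1, 1]
critic = mlp(state_dim + action_dim, 1)          # action-value Q(s, a)
actor_t = mlp(state_dim, action_dim, nn.Tanh())  # target networks (slowly tracking copies)
critic_t = mlp(state_dim + action_dim, 1)
actor_t.load_state_dict(actor.state_dict())
critic_t.load_state_dict(critic.state_dict())

actor_opt = torch.optim.Adam(actor.parameters(), lr=1e-3)
critic_opt = torch.optim.Adam(critic.parameters(), lr=1e-3)

def ddpg_update(s, a, r, s2, done):
    # Critic: regress Q(s, a) onto the bootstrapped target r + gamma * Q'(s', mu'(s')).
    with torch.no_grad():
        q_next = critic_t(torch.cat([s2, actor_t(s2)], dim=1))
        q_target = r + gamma * (1.0 - done) * q_next
    critic_loss = nn.functional.mse_loss(critic(torch.cat([s, a], dim=1)), q_target)
    critic_opt.zero_grad(); critic_loss.backward(); critic_opt.step()
    # Actor: maximise the critic's estimate of Q(s, mu(s)).
    actor_loss = -critic(torch.cat([s, actor(s)], dim=1)).mean()
    actor_opt.zero_grad(); actor_loss.backward(); actor_opt.step()
    # Soft-update the target networks toward the online networks.
    with torch.no_grad():
        for net, net_t in ((actor, actor_t), (critic, critic_t)):
            for p, p_t in zip(net.parameters(), net_t.parameters()):
                p_t.mul_(1.0 - tau).add_(tau * p)

# One update step on a random batch standing in for replay-buffer samples.
B = 32
ddpg_update(torch.randn(B, state_dim),
            torch.rand(B, action_dim) * 2 - 1,
            torch.randn(B, 1),
            torch.randn(B, state_dim),
            torch.zeros(B, 1))
```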
“…This approach enables precise and efficient control optimization with only a few experiments [22], [23]. Previous studies have applied PILCO to different engineering disciplines, including an autopilot underwater vehicle control problem [24], [25], constant force control of a robot surface [26], autonomous optimization of PID parameters in the control of a flight attitude simulator [27], and experiments on a planetary gear transmission shift console frame [28]. These studies have shown that PILCO can obtain the ideal controller strategy for the gearbox with a relatively small number of experiments.…”
Section: Introduction
confidence: 99%