2022
DOI: 10.1142/s2301385023310027
Reinforcement Learning Applications in Unmanned Vehicle Control: A Comprehensive Overview

Abstract: This paper briefly reviews the dynamics and the control architectures of unmanned vehicles; reinforcement learning (RL) in optimal control theory; and RL-based applications in unmanned vehicles. Nonlinearities and uncertainties in the dynamics of unmanned vehicles (e.g., aerial, underwater, and tailsitter vehicles) pose critical challenges to their control systems. Solving Hamilton–Jacobi–Bellman (HJB) equations to find optimal controllers becomes difficult in the presence of nonlinearities, uncertainties, and …
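For readers less familiar with the HJB connection, the sketch below shows the model-based baseline that such RL controllers aim to approximate: a discrete-time linear-quadratic regulator obtained by iterating the Riccati recursion, which is the linear special case of the HJB/Bellman optimality condition. The system matrices A, B, Q, R here are hypothetical placeholders, not taken from the paper.

```python
import numpy as np

# Hypothetical discrete-time linear system x_{k+1} = A x_k + B u_k with
# quadratic stage cost x_k' Q x_k + u_k' R u_k (placeholder values).
A = np.array([[1.0, 0.1],
              [0.0, 1.0]])
B = np.array([[0.0],
              [0.1]])
Q = np.eye(2)           # state cost weight
R = np.array([[0.1]])   # control cost weight

# Iterate the Riccati recursion, i.e. the Bellman/HJB optimality condition
# specialized to the linear-quadratic case, until it reaches a fixed point.
P = Q.copy()
for _ in range(500):
    K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)  # optimal feedback gain
    P = Q + A.T @ P @ (A - B @ K)                      # value-function update

print("optimal state-feedback gain (u = -K x):", K)
```

When the dynamics are nonlinear or uncertain, this closed-form route is unavailable, which is precisely where the RL-based approximations surveyed in the paper come in.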

Cited by 9 publications (5 citation statements). References: 50 publications.
“…Recent years have witnessed dramatic progress of reinforcement learning (RL) and multi-agent reinforcement learning (MARL) in real-life applications, such as unmanned vehicles (Liu et al 2022), traffic signal control (Noaeen et al 2022), and on-demand delivery (Wang et al 2023). Taking advantage of the centralized training decentralized execution (CTDE) (Oliehoek, Spaan, and Vlassis 2008; Kraemer and Banerjee 2016) paradigm, current cooperative MARL methods (Du et al 2023; Wang et al 2020a,b; Peng et al 2021; Zhang et al 2021; Zhou, Lan, and Aggarwal 2022) adopt value function factorization or a centralized critic to provide centralized learning signals to promote cooperation and achieve implicit or explicit credit assignment.…”
Section: Introduction (citation type: mentioning; confidence: 99%)
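As a rough illustration of the value function factorization idea mentioned in this excerpt, the sketch below implements a minimal VDN-style additive factorization with tabular per-agent utilities. The team size, table dimensions, learning rate, and team-reward interface are assumptions for illustration; they are not the cited methods.

```python
import numpy as np

rng = np.random.default_rng(0)

N_AGENTS, N_OBS, N_ACT = 2, 4, 3     # hypothetical team size, observation/action spaces
ALPHA, GAMMA = 0.1, 0.95

# Per-agent utility tables Q_i(o_i, a_i); the joint value is factorized
# additively (VDN-style): Q_tot = sum_i Q_i(o_i, a_i).
q = [np.zeros((N_OBS, N_ACT)) for _ in range(N_AGENTS)]

def decentralized_actions(obs, eps=0.1):
    """Execution is decentralized: each agent acts greedily on its own utility."""
    return [int(np.argmax(q[i][obs[i]])) if rng.random() > eps
            else int(rng.integers(N_ACT)) for i in range(N_AGENTS)]

def centralized_update(obs, acts, team_reward, next_obs, done):
    """Training is centralized: one TD error on the summed value, shared additively."""
    q_tot = sum(q[i][obs[i], acts[i]] for i in range(N_AGENTS))
    next_tot = sum(q[i][next_obs[i]].max() for i in range(N_AGENTS))
    td = team_reward + (0.0 if done else GAMMA * next_tot) - q_tot
    for i in range(N_AGENTS):            # each utility moves by the shared TD error
        q[i][obs[i], acts[i]] += ALPHA * td
```

Because the joint value is a sum of per-agent utilities, each agent's greedy local action also maximizes the joint value, which is what lets training be centralized while execution stays decentralized.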
“…Optimal control [13] is crucial in the modern control field because it can determine the best operating scheme for a system under given constraints, thus achieving optimal system performance. Optimal control has numerous applications across different domains, such as aerospace [14], the automotive industry [15], and unmanned aerial vehicles [16], playing a significant role in enhancing system performance, reducing energy consumption, and optimizing resource utilization. In nonlinear systems, adaptive dynamic programming and reinforcement learning (RL) technologies play vital roles [17–23].…”
Section: Introduction (citation type: mentioning; confidence: 99%)
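To make the ADP/RL connection in this excerpt concrete, here is a minimal sketch of the exact Bellman backup that adaptive dynamic programming schemes approximate: value iteration on a small, randomly generated finite MDP. The transition and reward tables are invented purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

N_S, N_A, GAMMA = 5, 2, 0.9

# Hypothetical finite MDP (invented for illustration): P[a, s, s'] gives
# transition probabilities and R[s, a] the expected immediate reward.
P = rng.random((N_A, N_S, N_S))
P /= P.sum(axis=2, keepdims=True)
R = rng.random((N_S, N_A))

# Value iteration: repeatedly apply the Bellman optimality operator
#   V(s) <- max_a [ R(s, a) + gamma * sum_{s'} P(s' | s, a) V(s') ].
V = np.zeros(N_S)
for _ in range(1000):
    Q = R + GAMMA * (P @ V).T        # Q(s, a) backup; (P @ V) has shape (A, S)
    V_new = Q.max(axis=1)
    if np.max(np.abs(V_new - V)) < 1e-10:
        break
    V = V_new

print("greedy policy per state:", Q.argmax(axis=1))
```

ADP and RL replace these exact tables with learned approximations when the model is unknown, trading exactness for applicability to nonlinear, uncertain systems.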
“…In order to deal with unknown dynamics, reinforcement learning (RL) [5–7] and approximate dynamic programming (ADP) [8–10] have been developed. Recent applications of RL can be found in areas such as autonomous vehicles and unmanned aerial vehicles [11,12]. In RL methods, solving the Bellman optimality equation typically requires function approximation techniques.…”
Section: Introduction (citation type: mentioning; confidence: 99%)
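As a hedged sketch of what function approximation for the Bellman optimality equation looks like in its simplest form, the snippet below runs semi-gradient Q-learning with a linear approximator over hypothetical features. The feature dimension, action count, and the random placeholder transitions are assumptions for illustration only, not the methods of the citing paper.

```python
import numpy as np

rng = np.random.default_rng(2)

N_FEAT, N_ACT = 8, 3            # hypothetical feature dimension and action count
ALPHA, GAMMA, EPS = 0.05, 0.99, 0.1

w = np.zeros((N_ACT, N_FEAT))   # one linear weight vector per action

def q_values(phi):
    """Approximate action values Q(s, a) = w_a . phi(s)."""
    return w @ phi

def td_update(phi, a, r, phi_next, done):
    """Semi-gradient Q-learning step toward the Bellman optimality target."""
    target = r + (0.0 if done else GAMMA * np.max(q_values(phi_next)))
    w[a] += ALPHA * (target - q_values(phi)[a]) * phi

# Placeholder rollout: in a real task phi(s) would come from the environment's
# observation/feature map; here transitions and rewards are random stand-ins.
phi = rng.random(N_FEAT)
for _ in range(100):
    a = int(np.argmax(q_values(phi))) if rng.random() > EPS else int(rng.integers(N_ACT))
    phi_next, r = rng.random(N_FEAT), float(rng.random())
    td_update(phi, a, r, phi_next, done=False)
    phi = phi_next
```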