2022
DOI: 10.1002/asjc.2866
Optimized tracking control using reinforcement learning strategy for a class of nonlinear systems

Abstract: This paper develops a simplified optimized tracking control using a reinforcement learning (RL) strategy for a class of nonlinear systems. Since a nonlinear control gain function is considered in the system modeling, it is challenging to extend existing RL-based optimal methods to the tracking control. The main reasons are that these methods' algorithms are very complex and that they require some strict conditions to be met. Different from these existing RL-based optimal methods, which derive the ac…

Cited by 4 publications (5 citation statements)
References 40 publications
“…Theorem 2. Consider system (19) and system (22) with controller (19)–(29) and (31)–(33). The closed-loop system can be presented as follows.…”
Section: Stability Analysis of ADRC Controller
Mentioning, confidence: 99%
“…Ben Jabeur and Seddik [21] used neural networks to optimize PID controller parameters, thus improving the control effect. Yang and Li [22] proposed a simplified optimal tracking control method based on a reinforcement learning strategy for nonlinear system control and proved its stability. Due to the black-box characteristics of AI-based methods, their interpretability is poor, and they place high demands on computing platforms.…”
Section: Introduction
Mentioning, confidence: 99%
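The first sentence of this statement summarizes tuning PID gains with a learned optimizer. The excerpt gives no details of the network in [21], so the sketch below substitutes a plain random search over the gains of a PID loop on an assumed first-order plant, purely to illustrate closed-loop gain tuning; every constant here is a hypothetical choice, not taken from the cited work:

```python
import numpy as np

def simulate(kp, ki, kd, dt=0.01, steps=500):
    """Step response of an assumed first-order plant y' = -y + u under PID;
    returns the integrated squared tracking error as the tuning cost."""
    y, integ, prev_err, cost = 0.0, 0.0, 1.0, 0.0
    for _ in range(steps):
        err = 1.0 - y                      # unit step reference
        integ += err * dt
        deriv = (err - prev_err) / dt
        u = kp * err + ki * integ + kd * deriv
        prev_err = err
        y += dt * (-y + u)                 # Euler update of the plant
        cost += err**2 * dt
    return cost

# Random search over the gain space stands in for the NN optimizer of [21].
rng = np.random.default_rng(1)
best_gains, best_cost = None, np.inf
for _ in range(200):
    gains = rng.uniform([0.0, 0.0, 0.0], [10.0, 5.0, 1.0])  # (kp, ki, kd)
    c = simulate(*gains)
    if c < best_cost:
        best_gains, best_cost = gains, c
print("best gains (kp, ki, kd):", np.round(best_gains, 2), "cost:", round(best_cost, 4))
```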
“…Subsequently, the event-triggered control function in the event-triggered controller determines whether to update the OBCC strategy based on the values of the system states and the local neighbor bipartite consensus error. If the event-triggering condition is satisfied, the OBCC strategy is updated using (18), and the event-triggered system state is obtained and transmitted to the ZOH through the network. Then, the critic-actor NNs in the RL algorithm solve the value function and OBCC policy values to achieve control of the MASs and realize bipartite consensus control.…”
Section: Distributed Event-Triggered Control and Stability Analysis
Mentioning, confidence: 99%
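The mechanism this statement describes — recompute and transmit the control only when a trigger condition on the state fires, and hold the last value through a zero-order hold (ZOH) otherwise — can be sketched on a scalar plant. The plant, gain K, and relative threshold sigma below are illustrative assumptions, not the OBCC design of the citing paper:

```python
# Event-triggered feedback on a scalar plant x' = a*x + b*u.
# The control is recomputed only when the gap between the current state
# and the last transmitted state exceeds a state-dependent threshold;
# between events the ZOH keeps the last control value.
a, b, K, dt = 1.0, 1.0, 2.0, 0.01
sigma = 0.1                        # trigger sensitivity (assumed form)

x, x_event = 1.0, 1.0              # state and last transmitted state
u = -K * x_event
events = 0
for step in range(1000):
    gap = abs(x - x_event)
    if gap > sigma * abs(x):       # event-triggering condition (assumed)
        x_event = x                # transmit the state through the network
        u = -K * x_event           # update the control strategy
        events += 1
    x += dt * (a * x + b * u)      # plant evolves under the held input
print(f"{events} control updates over 1000 steps, final |x| = {abs(x):.4f}")
```

With these numbers the state still converges while the control is updated only on the order of a hundred times in a thousand steps, which is the communication saving event triggering is meant to buy.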
“…Recently, the RL approach has played an effective role in seeking solutions to HJB equations in optimal control, because it can learn and make decisions based on actor-critic neural networks (NNs). For example, [18] proposes a simplified optimized tracking control strategy based on the RL algorithm. In fact, it utilizes NNs to approximate the solution of the HJB equation, which effectively addresses the issue of extensive computation in solving optimal control and tracking control problems.…”
Section: Introduction
Mentioning, confidence: 99%
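As a rough sketch of the actor-critic idea this statement attributes to [18], the toy example below uses a one-term quadratic basis in place of the critic NN and a linear feedback gain in place of the actor NN, on a scalar plant with quadratic cost. The plant, learning rates, discount, and initial stabilizing gain are all assumptions for illustration, not the scheme of the cited paper:

```python
import numpy as np

# Scalar plant x' = a*x + b*u (Euler-discretized), running cost x^2 + u^2.
# Critic: V(x) ~ w * x^2 (one quadratic basis). Actor: u = -theta * x.
# Both are updated from sampled transitions, mimicking actor-critic RL.
a, b, dt, gamma = 1.0, 1.0, 0.05, 0.99
alpha_w, alpha_t = 0.05, 0.01      # critic / actor learning rates (assumed)

rng = np.random.default_rng(0)
w, theta = 0.0, 2.0                # theta starts at a stabilizing gain
for episode in range(200):
    x = rng.uniform(-1.0, 1.0)
    for _ in range(100):
        u = -theta * x + 0.05 * rng.standard_normal()   # exploration noise
        x_next = x + dt * (a * x + b * u)
        r = dt * (x**2 + u**2)
        # Critic: one-step TD update on V(x) = w * x^2
        td = r + gamma * w * x_next**2 - w * x**2
        w += alpha_w * td * x**2
        # Actor: gradient step on the TD target r + gamma*V(x_next),
        # differentiating through u and x_next with du/dtheta = -x
        du = -x
        dtarget = dt * 2 * u * du + gamma * 2 * w * x_next * (dt * b * du)
        theta -= alpha_t * dtarget
        x = x_next
# For comparison, the continuous-time LQ-optimal gain here is 1 + sqrt(2) ~ 2.414
print(f"learned gain theta = {theta:.3f}, critic weight w = {w:.3f}")
```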
“…Route planning for AGVs is a single-source shortest route problem, which can be solved by various algorithms, such as Dijkstra's algorithm [1], neural networks [2], rapidly-exploring random trees (RRT) [3], and probabilistic road maps (PRM) [4]. Dijkstra's algorithm is a classical route search algorithm.…”
Section: Introduction
Mentioning, confidence: 99%
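Since this statement frames AGV route planning as a single-source shortest route problem, a minimal Dijkstra sketch may help; the warehouse graph and node names are hypothetical example data:

```python
import heapq

def dijkstra(graph, source):
    """Single-source shortest paths on a weighted graph.

    graph: dict mapping node -> list of (neighbor, edge_weight) pairs,
    with non-negative weights. Returns a dict of shortest distances.
    """
    dist = {source: 0.0}
    heap = [(0.0, source)]           # (distance-so-far, node)
    visited = set()
    while heap:
        d, u = heapq.heappop(heap)
        if u in visited:
            continue                 # stale entry; a shorter path was found
        visited.add(u)
        for v, w in graph.get(u, []):
            nd = d + w
            if v not in dist or nd < dist[v]:
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist

# Toy grid of waypoints an AGV might traverse (hypothetical)
warehouse = {
    "dock":    [("aisle1", 2.0), ("aisle2", 5.0)],
    "aisle1":  [("aisle2", 1.0), ("station", 4.0)],
    "aisle2":  [("station", 1.5)],
    "station": [],
}
print(dijkstra(warehouse, "dock"))
# {'dock': 0.0, 'aisle1': 2.0, 'aisle2': 3.0, 'station': 4.5}
```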