2021
DOI: 10.3390/sym14010031

Self-Optimizing Path Tracking Controller for Intelligent Vehicles Based on Reinforcement Learning

Abstract: The path tracking control system is a crucial component for autonomous vehicles; it is challenging to realize accurate tracking control when approaching a wide range of uncertain situations and dynamic environments, particularly when such control must perform as well as, or better than, human drivers. While many methods provide state-of-the-art tracking performance, they tend to emphasize constant PID control parameters, calibrated by human experience, to improve tracking accuracy. A detailed analysis shows th…
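The abstract's central idea is replacing hand-calibrated, constant PID gains with gains that an RL policy selects online. The sketch below illustrates that structure only; the class and function names, the state layout, and the constant example policy are hypothetical placeholders, not the paper's implementation.

```python
# Minimal sketch of the idea in the abstract: a PID path tracking
# controller whose gains are chosen online by a learned policy instead
# of being fixed by hand calibration. All names here are hypothetical
# placeholders, not the paper's code.

class AdaptivePID:
    def __init__(self, kp, ki, kd, dt=0.02):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.dt = dt
        self.integral = 0.0
        self.prev_error = 0.0

    def set_gains(self, kp, ki, kd):
        # Called by the RL policy at each control step.
        self.kp, self.ki, self.kd = kp, ki, kd

    def step(self, error):
        # error: lateral deviation from the reference path (m).
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative


def control_step(policy, pid, state):
    # state: e.g. (lateral error, heading error, speed); the policy maps
    # it to a gain triple -- this is where the "self-optimizing" RL part
    # plugs in. `policy` is any callable, e.g. a trained actor network.
    kp, ki, kd = policy(state)
    pid.set_gains(kp, ki, kd)
    return pid.step(state[0])  # steering command from the lateral error


# Usage with a constant stand-in policy (a trained agent would replace it):
pid = AdaptivePID(kp=1.0, ki=0.0, kd=0.1)
steer = control_step(lambda s: (1.2, 0.01, 0.15), pid, (0.5, 0.0, 20.0))
```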

Cited by 8 publications (3 citation statements) | References 38 publications
“…To address the problems of feature representation and online learning capability in the learning control of uncertain dynamic systems, a multi-kernel online RL method for path tracking control was proposed in [24], where a multi-kernel feature learning framework was designed based on dual heuristic programming; simulations under S-curve and urban road conditions verified that the controller achieves better tracking accuracy and stability than the LQR and pure pursuit (PP) controllers. Ma et al. combined RL and PID to propose a self-optimizing path tracking control method based on the interactive learning mechanism of the RL framework, achieving online optimization of the PID control parameters; simulation and real-vehicle tests showed that the method maintains better tracking performance under high-speed conditions (maximum speed above 100 km/h) [104].…”
Section: Reinforcement Learning (mentioning)
confidence: 99%
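The first study quoted here builds its controller on kernel-based features rather than a fixed state representation. As a loose, self-contained sketch of what multi-kernel (RBF) feature construction can look like for a linear critic; the centers, widths, and dimensions below are arbitrary illustrations, not values from [24]:

```python
import numpy as np

# Loose sketch of multi-kernel feature construction: the state is
# projected onto Gaussian (RBF) kernels at several widths, and the
# concatenated responses serve as features for a learned value
# function / critic. All numbers are arbitrary illustrations.

def multikernel_features(state, centers, widths):
    # state: (d,) array; centers: (m, d) array; widths: list of scalars.
    feats = []
    for w in widths:  # one kernel "scale" per width
        d2 = np.sum((centers - state) ** 2, axis=1)
        feats.append(np.exp(-d2 / (2.0 * w ** 2)))
    return np.concatenate(feats)

rng = np.random.default_rng(0)
centers = rng.uniform(-1, 1, size=(16, 3))       # 16 centers over a 3-D state
phi = multikernel_features(np.zeros(3), centers, widths=[0.2, 0.5, 1.0])
value_estimate = phi @ rng.normal(size=phi.size)  # linear critic on features
```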
“…Control algorithms, as the core component of path tracking control systems, have been the focus of most researchers, who aim to enhance the precision and robustness of path tracking control. Currently, widely used control algorithms include PID control [7][8][9], fuzzy control [10][11][12], model predictive control [13][14][15], and sliding mode control [16][17][18]. In [8], the authors proposed a steering method that integrates reinforcement learning with traditional PID controllers.…”
Section: Introduction (mentioning)
confidence: 99%
“…Currently, widely used control algorithms include PID control [7][8][9], fuzzy control [10][11][12], model predictive control [13][14][15], and sliding mode control [16][17][18]. In [8], the authors proposed a steering method that integrates reinforcement learning with traditional PID controllers. This approach employs an RL framework with interactive learning mechanisms, enabling adaptive adjustment of the PID control parameters and maintaining excellent tracking accuracy even on complex trajectories.…”
Section: Introduction (mentioning)
confidence: 99%
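Both Introduction snippets describe an interactive learning loop that adapts the PID gains from tracking experience. Below is a rough, self-contained toy of that loop, with a 1-D lateral-error model and a simple random-search learner standing in for the RL algorithm; the dynamics, reward, and update rule are generic placeholders, not the cited paper's formulation.

```python
import random

# Toy "interactive learning" loop: repeatedly simulate tracking with
# candidate PID gains, reward small lateral error, and keep gain
# updates that improve the reward. Everything here is a placeholder.

def rollout(gains, steps=200, dt=0.02):
    kp, ki, kd = gains
    err, integral, prev = 1.0, 0.0, 1.0   # start 1 m off the path
    cost = 0.0
    for _ in range(steps):
        integral += err * dt
        u = kp * err + ki * integral + kd * (err - prev) / dt
        prev = err
        err += (-u + 0.5 * err) * dt      # toy, mildly unstable dynamics
        cost += abs(err)
    return -cost                           # reward: small tracking error

def train(episodes=200):
    gains = [1.0, 0.0, 0.1]
    best = rollout(gains)
    for _ in range(episodes):
        trial = [max(0.0, g + random.gauss(0, 0.1)) for g in gains]
        r = rollout(trial)
        if r > best:                       # keep gains that track better
            gains, best = trial, r
    return gains

print(train())
```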