2017 IEEE 56th Annual Conference on Decision and Control (CDC)
DOI: 10.1109/cdc.2017.8264027
Repetitive learning model predictive control: An autonomous racing example

Cited by 44 publications (34 citation statements). References 12 publications.
“…Learning has been used in conjunction with MPC to learn driving models (Lefevre et al., 2015; Lefèvre et al., 2016), driving dynamics for race cars operating at their handling limits (Drews et al., 2017; Rosolia et al., 2017), as well as to improve path tracking accuracy (Brunner, Rosolia, Gonzales, & Borrelli, 2017; Ostafew et al., 2015, 2016). These methods use learning mechanisms to identify nonlinear dynamics that are used in the MPC's trajectory cost function optimization.…”
Section: Learning Controllers (mentioning, confidence: 99%)
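As a concrete illustration of the pattern this excerpt describes, the sketch below fits a dynamics model to logged state-input data by least squares. It is a minimal stand-in for the learning mechanisms in the cited works, which identify richer nonlinear models; the function name and data layout are assumptions for illustration only.

```python
import numpy as np

# Hypothetical logged driving data: states X (T x n) and inputs U (T x m).
# Fit x_{k+1} ~ A x_k + B u_k by least squares -- one simple instance of
# "learning" a dynamics model that an MPC optimizer can then use.
def fit_linear_dynamics(X, U):
    Z = np.hstack([X[:-1], U[:-1]])            # regressors [x_k, u_k]
    Y = X[1:]                                  # targets x_{k+1}
    Theta, *_ = np.linalg.lstsq(Z, Y, rcond=None)
    n = X.shape[1]
    A = Theta[:n].T                            # (n x n) state matrix
    B = Theta[n:].T                            # (n x m) input matrix
    return A, B
```

The cited controllers refit or adapt such models online as new data arrives, so the identified dynamics track the vehicle's current operating regime.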
“…Moreover, in Figure 6 we report the computational time. It is interesting to note that on average the finite time optimal control problem (8) is solved in less than 10 ms, whereas it took 90 ms to solve the finite time optimal control problem associated with [20]. We underline that both strategies have been tested with a prediction horizon of N = 12 and a sampling frequency of 10 Hz.…”
Section: Affine Time Varying Model (mentioning, confidence: 99%)
“…Furthermore, we linearize the kinematic equations of motion to approximate the evolution of the vehicle's position as a function of the velocities. In contrast to our previous works [20], [21], this strategy allows us to reformulate the LMPC as a Quadratic Program (QP), which can be solved efficiently.…”
Section: Introduction (mentioning, confidence: 99%)
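The key point of this excerpt is that linearizing the dynamics turns each control step into a QP. The sketch below poses one such linearized MPC step as a QP with cvxpy; the cost matrices, input bound, and the omission of the LMPC terminal cost and safe set are simplifying assumptions, not the paper's exact formulation.

```python
import numpy as np
import cvxpy as cp

# Minimal sketch of a single MPC step as a QP over linearized dynamics
# x_{k+1} = A x_k + B u_k, with horizon N matching the excerpt above.
def solve_mpc_qp(A, B, x0, N=12, u_max=1.0):
    n, m = B.shape
    Q, R = np.eye(n), 0.1 * np.eye(m)          # illustrative cost weights
    x = cp.Variable((n, N + 1))
    u = cp.Variable((m, N))
    cost, constr = 0, [x[:, 0] == x0]
    for k in range(N):
        cost += cp.quad_form(x[:, k], Q) + cp.quad_form(u[:, k], R)
        constr += [x[:, k + 1] == A @ x[:, k] + B @ u[:, k],
                   cp.norm(u[:, k], 'inf') <= u_max]
    cp.Problem(cp.Minimize(cost), constr).solve()
    return u[:, 0].value                        # apply the first input only
```

Because cost and constraints are quadratic and affine, off-the-shelf QP solvers handle each step in milliseconds, which is what enables the real-time rates quoted in the excerpts above.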
“…Finally, we underline that the autonomous racing problem is a repetitive task and the goal is not to steer the system to the origin. Therefore, we use the method from [18] to apply the proposed strategy to the autonomous racing repetitive control problem. In particular, we define the set of states beyond the finish line of the track of length L, X_F = {x ∈ R^6 : e_5^T x ≥ L}, and we use the set X_F to compute the cost associated with the stored trajectories. For the first 29 laps of the experiment, we run the Learning Model Predictive Controller (LMPC) from [18] to learn a fast trajectory which drives the vehicle around the track. From the 30th lap, we run the local data-based policy (11) and (12) using the latest M = 8 laps and N = 10 stored data points for each lap.…”
Section: B. Example II: Autonomous Racing (mentioning, confidence: 99%)
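To make the cost bookkeeping in this excerpt concrete, the sketch below computes a time-to-finish cost-to-go for one stored lap. It follows the excerpt's convention that the fifth state component (e_5^T x, index 4 in Python) measures progress along a track of length L, so a state belongs to X_F once that component reaches L; the array layout and function name are illustrative assumptions.

```python
import numpy as np

# Cost-to-go for one stored lap: the number of time steps remaining until
# the trajectory first enters X_F = {x : x[4] >= L}, and zero thereafter.
def cost_to_go(lap, L):
    # lap: (T x 6) array of stored states for a single lap.
    T = lap.shape[0]
    crossed = np.where(lap[:, 4] >= L)[0]      # indices of states in X_F
    finish = crossed[0] if crossed.size else T
    return np.maximum(finish - np.arange(T), 0)
```

Stored states with lower cost-to-go are closer to finishing the lap, so minimizing this cost over the safe set of previous trajectories is what drives the lap time down from one iteration to the next.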