2021
DOI: 10.1109/lra.2021.3091019
Learning-based Robust Motion Planning With Guaranteed Stability: A Contraction Theory Approach

Abstract: This paper presents Learning-based Autonomous Guidance with RObustness and Stability guarantees (LAG-ROS), which provides machine learning-based nonlinear motion planners with formal robustness and stability guarantees, by designing a differential Lyapunov function using contraction theory. LAG-ROS utilizes a neural network to model a robust tracking controller independently of a target trajectory, for which we show that the Euclidean distance between the target and controlled trajectories is exponentially bou…

Cited by 17 publications
(23 citation statements)
References 28 publications
“…Although computationally tractable, it still has some limitations in that the problem size grows exponentially with the number of variables and basis functions [100]. Learning-based and data-driven control using contraction theory [14,15,101,102] has been developed to refine these ideas, using the high representational power of DNNs [103][104][105] and their scalable training realized by stochastic gradient descent [47,106].…”
Section: Construction of Contraction Metrics (Sec. 3-4)
confidence: 99%
“…The major advantage of using contraction theory for learning-based and data-driven control is that, by regarding its internal learning error as an external disturbance, we can ensure the distance between the target and learned trajectories to be bounded exponentially with time as in the CV-STEM results [14,15,101,102], with its steady-state upper bound proportional to the learning error. Such robustness and incremental stability guarantees are useful for formally evaluating the performance of machine learning techniques such as reinforcement learning [107][108][109][110], imitation learning [111][112][113][114][115], or neural networks [103][104][105].…”
Section: Construction of Contraction Metrics (Sec. 3-4)
confidence: 99%
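The exponential bound described in the statement above can be illustrated numerically. The following is a minimal, hypothetical sketch (not from the paper): a scalar contracting error system with contraction rate `alpha`, where the learning error is treated as a bounded external disturbance `eps`, so the tracking error should satisfy ||e(t)|| <= e^(-alpha*t)||e(0)|| + (eps/alpha)(1 - e^(-alpha*t)), with steady-state bound eps/alpha proportional to the learning error. All names and values here are illustrative assumptions.

```python
import math

# Assumed parameters (illustrative only):
alpha = 2.0   # contraction rate
eps   = 0.1   # bound on the learning error, treated as a disturbance
dt    = 1e-3  # Euler integration step
T     = 5.0   # simulation horizon

x, x_d = 1.0, 0.0            # controlled state and (constant) target
e0 = abs(x - x_d)            # initial tracking error

t = 0.0
worst_margin = float("inf")  # min of (theoretical bound - actual error)
while t < T:
    d = eps * math.sin(10 * t)           # any disturbance with |d| <= eps
    x += dt * (-alpha * (x - x_d) + d)   # contracting error dynamics
    t += dt
    # Exponential bound with steady-state term proportional to eps:
    bound = math.exp(-alpha * t) * e0 + (eps / alpha) * (1 - math.exp(-alpha * t))
    worst_margin = min(worst_margin, bound - abs(x - x_d))

# The actual error should stay below the bound (up to integration error).
print(worst_margin >= -1e-6)
```

As the disturbance bound `eps` shrinks (i.e., the learned controller approximates the robust controller better), the steady-state term eps/alpha shrinks proportionally, which is the sense in which the guarantee degrades gracefully with learning error.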