2019
DOI: 10.29007/jz9w

Learning Stabilizable Dynamical Systems via Control Contraction Metrics

Abstract: We propose a novel framework for learning stabilizable nonlinear dynamical systems for continuous control tasks in robotics. The key idea is to develop a new control-theoretic regularizer for dynamics fitting rooted in the notion of stabilizability, which guarantees that the learned system can be accompanied by a robust controller capable of stabilizing any open-loop trajectory that the system may generate. By leveraging tools from contraction theory, statistical learning, and convex optimization, we provide a…
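To make the abstract's key idea concrete, the following is a minimal, purely illustrative sketch of dynamics fitting with a stability-oriented penalty. It is not the paper's algorithm: it fits a linear model and uses a crude contraction proxy under the identity metric, whereas the paper learns a control contraction metric for nonlinear systems; all numerical values and names below are assumptions.

```python
# Illustrative sketch only (not the paper's method): least-squares fit of
# discrete-time linear dynamics x_{t+1} ~ A x_t + B u_t, augmented with a
# crude contraction-style penalty that discourages models whose autonomous
# part expands distances under the identity metric.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
n, m, T = 3, 1, 200                       # state dim, input dim, sample count

# Hypothetical ground-truth system used only to generate training data.
A_true = np.array([[0.9, 0.2, 0.0],
                   [0.0, 0.8, 0.1],
                   [0.0, 0.0, 0.95]])
B_true = np.array([[0.0], [0.5], [1.0]])
X = rng.normal(size=(T, n))
U = rng.normal(size=(T, m))
Y = X @ A_true.T + U @ B_true.T + 0.01 * rng.normal(size=(T, n))

lam = 10.0                                # regularizer weight (assumed)

def loss(theta):
    A = theta[:n * n].reshape(n, n)
    B = theta[n * n:].reshape(n, m)
    fit = np.mean(np.sum((Y - X @ A.T - U @ B.T) ** 2, axis=1))
    # Penalize violation of A^T A - I < 0 (contraction under the identity
    # metric); the paper instead learns a full control contraction metric,
    # which this scalar proxy does not capture.
    eig_max = np.linalg.eigvalsh(A.T @ A - np.eye(n)).max()
    return fit + lam * max(eig_max, 0.0) ** 2

theta0 = np.zeros(n * n + n * m)
res = minimize(loss, theta0, method="BFGS")
A_hat = res.x[:n * n].reshape(n, n)
print("spectral norm of learned A:", np.linalg.norm(A_hat, 2))
```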

Cited by 11 publications (8 citation statements)
References 26 publications (83 reference statements)

“…5) Computational Complexity: Since (a) and LAG-ROS (c) are implementable with one neural net evaluation at each t, their performance is also compared with (b), which requires solving motion planning problems to obtain its control input. Its time horizon is selected so that the trajectory optimization [42] is solvable online with current computational power, for the sake of a fair comparison. We denote the computational time as ∆t in this section; it should be less than the maximum control time interval ∆t_max, i.e., ∆t ≤ ∆t_max = 0.1 s.…”
Section: A. Simulation Setup
confidence: 99%
“…Kernel Hilbert spaces have also been used to define suitable domains for operators (Rosenfeld et al., 2019; Giannakis et al., 2019). In most cases, the kernel is taken off-the-shelf, as with Gaussian kernels in connection with Bayesian inference (Singh et al., 2018; Bertalan et al., 2019).…”
Section: Introduction
confidence: 99%
“…A preliminary version of this article was presented at WAFR 2018 (Singh et al., 2018). In this revised and extended version, we include the following additional contributions: (i) rigorous derivation of the stabilizability-regularized finite-dimensional optimization problem using RKHS theory and random matrix features; (ii) extensive additional numerical studies into the convergence behavior of the iterative algorithm and comparison with traditional ridge-regression techniques; and (iii) validation of the algorithm on a quadrotor testbed with partially closed control loops to emulate a planar quadrotor.…”
Section: Introduction
confidence: 99%
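The contribution list above mentions RKHS theory, random matrix features, and a comparison against traditional ridge regression. As background only, here is a generic random-Fourier-feature ridge regression sketch in the style of Rahimi and Recht, not the paper's stabilizability-regularized formulation; the dimensions, bandwidth, and toy regression target are assumed.

```python
# Generic random-Fourier-feature ridge regression, shown only to illustrate
# the "random features + ridge regression" baseline mentioned above.
import numpy as np

rng = np.random.default_rng(1)
d, D, N = 4, 300, 500                     # input dim, feature count, samples
sigma, lam = 1.0, 1e-3                    # kernel bandwidth, ridge weight

# Random frequencies and phases for features approximating a Gaussian kernel.
W = rng.normal(scale=1.0 / sigma, size=(D, d))
b = rng.uniform(0.0, 2.0 * np.pi, size=D)

def phi(Z):
    # Map inputs Z of shape (N, d) to an (N, D) random Fourier feature matrix.
    return np.sqrt(2.0 / D) * np.cos(Z @ W.T + b)

# Toy scalar regression target standing in for one coordinate of the dynamics.
Z = rng.normal(size=(N, d))
y = np.sin(Z[:, 0]) + 0.5 * Z[:, 1] ** 2 + 0.01 * rng.normal(size=N)

# Ridge regression in feature space: solve (F^T F + lam I) alpha = F^T y.
F = phi(Z)
alpha = np.linalg.solve(F.T @ F + lam * np.eye(D), F.T @ y)

Z_test = rng.normal(size=(5, d))
print("predictions:", phi(Z_test) @ alpha)
```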