Lyapunov-stable neural-network control
Preprint, 2021
DOI: 10.48550/arxiv.2109.14152

Cited by 12 publications (19 citation statements)
References 36 publications
“…Optimization and verification techniques might also be useful for obtaining safety guarantees for ANNs in process control applications. For example, Dai et al. (2021) guarantee Lyapunov stability of ANN controllers during training by also learning a Lyapunov function as an ANN, while Paulson and Mesbah (2020) propose a projection operator for guaranteeing feasibility and constraint satisfaction.…”
Section: How Restricted Data Challenges Are Addressed In the Literature
Mentioning, confidence: 99%
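As a concrete illustration of the approach the excerpt describes, the following is a minimal sketch, assuming a PyTorch setup, of jointly training a controller network and a Lyapunov-function network by penalizing sampled violations of the Lyapunov positivity and decrease conditions. The names ControllerNet, LyapunovNet and dynamics are hypothetical placeholders, and the sketch omits the MIP-based verification that Dai et al. (2021) use to certify the conditions rather than only penalizing sampled states.

import torch
import torch.nn as nn

# Hypothetical networks: a state-feedback controller u = pi(x) and a Lyapunov candidate V(x).
class ControllerNet(nn.Module):
    def __init__(self, nx, nu):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(nx, 16), nn.ReLU(), nn.Linear(16, nu))
    def forward(self, x):
        return self.net(x)

class LyapunovNet(nn.Module):
    def __init__(self, nx):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(nx, 16), nn.ReLU(), nn.Linear(16, 1))
    def forward(self, x):
        # Subtract V(0) so that V(0) = 0, a standard Lyapunov normalization.
        return self.net(x) - self.net(torch.zeros_like(x))

def dynamics(x, u):
    # Placeholder discrete-time plant x_{k+1} = f(x_k, u_k); replace with the real model.
    return 0.9 * x + 0.1 * u

def lyapunov_loss(pi, V, x, eps=1e-3):
    """Penalize violations of V(x) > 0 and of the decrease condition V(f(x, pi(x))) < V(x)."""
    x_next = dynamics(x, pi(x))
    margin = eps * x.norm(dim=1, keepdim=True)
    positivity = torch.relu(margin - V(x))
    decrease = torch.relu(V(x_next) - V(x) + margin)
    return (positivity + decrease).mean()

nx, nu = 2, 2
pi, V = ControllerNet(nx, nu), LyapunovNet(nx)
opt = torch.optim.Adam(list(pi.parameters()) + list(V.parameters()), lr=1e-3)
for _ in range(1000):
    x = torch.randn(256, nx)          # states sampled from the region of interest
    loss = lyapunov_loss(pi, V, x)
    opt.zero_grad()
    loss.backward()
    opt.step()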
“…By combining these constraints with a mixed integer encoding of the control network π using the techniques of Tjeng et al. [24], we obtain a set of mixed integer constraints corresponding to the abstract closed-loop system f. The optimization problem in (18) then becomes a MILP and can be solved using a MILP solver to obtain an over-approximated BP set.…”
Section: Nonlinear Dynamics: Over-approximation Of BP Sets
Mentioning, confidence: 99%
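As a rough, hedged illustration of such a mixed-integer encoding, the sketch below applies a big-M formulation in the spirit of Tjeng et al. to a single ReLU neuron using the PuLP modelling library. The helper encode_relu and its pre-activation bounds lo, up are assumptions for illustration; a full pipeline would encode every layer of π together with the dynamics constraints of the abstract closed-loop system.

from pulp import LpProblem, LpVariable, LpMinimize, value

def encode_relu(prob, pre, lo, up, name):
    """Big-M encoding of y = max(0, pre), valid when lo <= pre <= up with lo < 0 < up."""
    y = LpVariable(f"y_{name}", lowBound=0)
    a = LpVariable(f"a_{name}", cat="Binary")   # a = 1 <=> the neuron is active
    prob += y >= pre                            # active branch: y >= pre
    prob += y <= pre - lo * (1 - a)             # when a = 1 this forces y <= pre
    prob += y <= up * a                         # when a = 0 this forces y = 0
    return y

# Tiny usage example: minimize one post-activation output over an input box.
prob = LpProblem("relu_encoding_demo", LpMinimize)
x = LpVariable("x", lowBound=-1.0, upBound=1.0)
pre = 2.0 * x + 0.5                             # one neuron's pre-activation w*x + b
y = encode_relu(prob, pre, lo=-1.5, up=2.5, name="n0")
prob += y                                       # objective
prob.solve()
print(value(y))                                 # 0.0: the neuron can be driven inactive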
“…The second source of over-approximation error is present in both algorithms and results from (18), which produces axis-aligned, hyper-rectangular BP sets. The use of hyper-rectangular sets can result in large over-approximation errors if the true BP sets are not well-represented by axis-aligned hyper-rectangles.…”
Section: E. Sources Of Over-approximation Error
Mentioning, confidence: 99%
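To make the hyper-rectangle point concrete, here is a tiny illustrative sketch (not from the cited paper) showing how the axis-aligned bounding box of a thin diagonal set inflates the represented region; bounding_box is a hypothetical helper.

import numpy as np

def bounding_box(points):
    """Axis-aligned hyper-rectangle (component-wise min/max) enclosing a point cloud."""
    pts = np.asarray(points)
    return pts.min(axis=0), pts.max(axis=0)

# A thin diagonal segment: its bounding box is the full unit square, so the
# hyper-rectangular representation vastly over-approximates the true set.
diag = np.array([[t, t] for t in np.linspace(0.0, 1.0, 50)])
lo, hi = bounding_box(diag)
print(lo, hi)  # [0. 0.] [1. 1.]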
“…Third, in general, the control action may not just be a function of the current state, and the coupling of system states at different time steps can be complicated. The first two issues will cause trouble for existing Lyapunov-based ROA analysis methods using semidefinite programming (Yin et al., 2021; Hu et al., 2020; Jin and Lavaei, 2020; Aydinoglu et al., 2021) or mixed-integer programs (Chen et al., 2020, 2021; Dai et al., 2021). Due to the last issue, the methods of Lyapunov neural networks (Richards et al., 2018; Chang et al., 2019) or other stability certificate learning methods (Kenanian et al., 2019; Giesl et al., 2020; Ravanbakhsh and Sankaranarayanan, 2019) may also not be applicable, since these methods typically require the control action to depend on the current state.…”
Section: Introduction
Mentioning, confidence: 99%
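For context on the certificate these methods compute, the following hypothetical sketch checks the discrete-time Lyapunov decrease condition for a state-feedback law u = pi(x) on sampled states; f, pi and V are placeholder callables, and the excerpt's point is that such a check presupposes a control action that depends only on the current state.

import numpy as np

def decrease_violations(f, pi, V, states):
    """Return sampled states where the decrease condition V(f(x, pi(x))) < V(x) fails."""
    return [x for x in states if V(f(x, pi(x))) >= V(x)]

# Toy usage: a stable linear closed loop with a quadratic V; no violations expected.
f = lambda x, u: 0.5 * x + 0.1 * u
pi = lambda x: -x
V = lambda x: float(np.dot(x, x))
samples = [np.random.randn(2) for _ in range(100)]
print(len(decrease_violations(f, pi, V, samples)))   # 0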