2022
DOI: 10.48550/arxiv.2204.00318
Preprint

Towards gain tuning for numerical KKL observers

Abstract: This paper presents a first step towards tuning observers for nonlinear systems. Relying on recent results around Kazantzis-Kravaris/Luenberger (KKL) observers, we propose to design a family of observers parametrized by the cutoff frequency of a linear filter. We use neural networks to learn the mapping between the observer and the nonlinear system as a function of this frequency, and present a novel method to sample the state-space efficiently for nonlinear regression. We then propose a criterion related to n…
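
For orientation, here is a minimal sketch of the construction the abstract builds on, written in standard KKL notation (the symbols below are assumptions, not taken from the paper): the observer runs a linear filter on the measured output y and recovers the state through a learned inverse of the immersion,

\[
\dot{z} = D(\omega)\, z + F\, y, \qquad \hat{x} = T^{*}_{\theta}(z, \omega),
\]

where D(\omega) is a Hurwitz matrix whose eigenvalues are scaled by the cutoff frequency \omega, and the neural network T^{*}_{\theta} approximates the inverse of the injective immersion T, conditioned on \omega so that a single network covers the whole observer family.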

Cited by 2 publications (3 citation statements)
References 17 publications
“…The proposed approach can be used to determine empirically the class of (distributional) indistinguishability of any state for an arbitrary nonlinear system, and to observe the continuous increase in relative distinguishability as we get farther from that class. We illustrate this with an undamped, unforced Duffing oscillator, often used in nonlinear observer design [26,27]:…”
Section: B. Analyzing Observability in the State Space
confidence: 99%
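
For reference, the undamped, unforced Duffing oscillator mentioned in the excerpt is commonly written, in one standard normalization (assumed here, not taken from the citing paper), as

\[
\dot{x}_1 = x_2, \qquad \dot{x}_2 = -x_1 - x_1^3, \qquad y = x_1,
\]

i.e. the classical Duffing equation \ddot{x} + \alpha x + \beta x^3 = 0 with \alpha = \beta = 1 and zero damping and forcing.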
“…Yet solving this parametric PDE is still difficult, and is further complicated by the differentiation. The neural network-based approaches proposed in existing works [23][24][25], although built on the sound basis that neural networks act as universal approximators, are not theoretically exempt from the nonconvexity of the training problem and its local solutions.…”
Section: KKL Observer and Its Existence
confidence: 99%
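
For context, the parametric PDE referred to in this excerpt is, in common KKL notation (assumed here), the immersion equation: for dynamics \dot{x} = f(x) with output y = h(x) and a Hurwitz pair (D, F), the transformation T must satisfy

\[
\frac{\partial T}{\partial x}(x)\, f(x) = D\, T(x) + F\, h(x),
\]

so that z = T(x) obeys the linear filter dynamics \dot{z} = D z + F y. The differentiation mentioned in the excerpt enters through the Jacobian \partial T / \partial x.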
“…To avoid solving the PDEs, Ramos et al. [23] proposed to train a neural network that represents the inverse transformation from the immersed states to the actual states. Buisson-Fenet et al. [24] further considered the tuning of poles in the embedded linear dynamics along with the neural network training. A more sophisticated approach by Niazi et al. [25] adopted the idea of physics-informed neural networks and used two neural networks, one for the immersion and one for its inverse; the loss metric for their training includes a state reconstruction error and a prediction error of the embedding.…”
Section: Introduction
confidence: 99%
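
To make the regression-based approach described in this excerpt concrete, here is a hedged, self-contained sketch (all constants, dimensions, and names are illustrative assumptions, not code from the cited works): co-simulate the Duffing plant with the filter \dot{z} = D z + F y, discard the transient so that z \approx T(x), then fit a small network to the pairs (z, x) as an approximation of the inverse map T^{-1}.

import numpy as np
import torch
import torch.nn as nn

rng = np.random.default_rng(0)

def duffing(x):
    # Undamped, unforced Duffing oscillator: x1' = x2, x2' = -x1 - x1^3.
    return np.array([x[1], -x[0] - x[0] ** 3])

def simulate(x0, omega, t_end=20.0, dt=1e-3):
    # Co-simulate the plant and the KKL filter z' = D z + F y with y = x1.
    nz = 5                                        # d_z = 2n + 1 for n = 2 states
    D = -omega * np.diag(np.arange(1.0, nz + 1))  # Hurwitz, poles scaled by omega
    F = np.ones(nz)
    x, z = np.asarray(x0, dtype=float), np.zeros(nz)
    xs, zs = [], []
    for _ in range(int(t_end / dt)):
        x = x + dt * duffing(x)                   # forward Euler (fine for a sketch)
        z = z + dt * (D @ z + F * x[0])
        xs.append(x.copy())
        zs.append(z.copy())
    return np.array(xs), np.array(zs)

# Build a training set from several initial conditions, dropping the initial
# transient so that z has (approximately) converged to T(x).
X, Z = [], []
for _ in range(20):
    xs, zs = simulate(rng.uniform(-1.0, 1.0, size=2), omega=2.0)
    X.append(xs[5000:])
    Z.append(zs[5000:])
X = torch.tensor(np.concatenate(X), dtype=torch.float32)
Z = torch.tensor(np.concatenate(Z), dtype=torch.float32)

# Small MLP approximating the inverse map T^{-1}: z -> x, fit by least squares.
model = nn.Sequential(nn.Linear(5, 64), nn.Tanh(),
                      nn.Linear(64, 64), nn.Tanh(),
                      nn.Linear(64, 2))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for step in range(2000):
    idx = torch.randint(0, len(Z), (1024,))
    loss = nn.functional.mse_loss(model(Z[idx]), X[idx])
    opt.zero_grad()
    loss.backward()
    opt.step()
print(f"final regression loss: {loss.item():.3e}")

Sweeping omega and retraining, or conditioning the network on omega as in the abstract above, then yields the parametrized observer family whose gain is to be tuned.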