This paper presents a class of linear predictors for nonlinear controlled dynamical systems. The basic idea is to lift (or embed) the nonlinear dynamics into a higher-dimensional space where its evolution is approximately linear. In the uncontrolled setting, this procedure amounts to a numerical approximation of the Koopman operator associated with the nonlinear dynamics. In this work, we extend the Koopman operator to controlled dynamical systems and apply Extended Dynamic Mode Decomposition (EDMD) to compute a finite-dimensional approximation of the operator in such a way that the approximation has the form of a linear controlled dynamical system. In numerical examples, the linear predictors obtained in this way outperform existing linear predictors such as those based on local linearization or the so-called Carleman linearization. Importantly, the procedure for constructing these linear predictors is completely data-driven and extremely simple: it boils down to a nonlinear transformation of the data (the lifting) and a linear least-squares problem in the lifted space that can be readily solved for large data sets. These linear predictors can then be used to design controllers for the nonlinear dynamical system using linear controller design methodologies. We focus in particular on model predictive control (MPC) and show that, for MPC controllers designed in this way, the underlying optimization problem has computational complexity comparable to that of MPC for a linear dynamical system with the same number of control inputs and the same state-space dimension. Importantly, linear inequality constraints on the state and control inputs, as well as nonlinear constraints on the state, can be imposed in a linear fashion in the proposed MPC scheme. Similarly, cost functions nonlinear in the state variable can be handled in a linear fashion.
We treat both the full-state-measurement and input-output cases, as well as systems with disturbances and noise. Numerical examples (including control of a high-dimensional nonlinear PDE) demonstrate the approach; the source code is available online.
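The lifting-plus-least-squares construction can be sketched in a few lines. The dictionary of observables, the test dynamics, and all names below are illustrative assumptions, not the paper's exact setup:

```python
import numpy as np

def lift(x):
    """Illustrative lifting: the state plus a few monomial observables."""
    return np.array([x, x**2, x**3])

# Sample transitions of an assumed scalar test system x+ = 1.1*x - 0.1*x**3 + 0.1*u.
rng = np.random.default_rng(0)
M = 2000
X = rng.uniform(-1.0, 1.0, M)              # sampled states
U = rng.uniform(-1.0, 1.0, M)              # sampled inputs
Y = 1.1 * X - 0.1 * X**3 + 0.1 * U         # successor states

# Lift the data and fit the linear predictor z+ ~ A z + B u by least squares.
Zx = np.stack([lift(x) for x in X], axis=1)   # N x M lifted states
Zy = np.stack([lift(y) for y in Y], axis=1)   # N x M lifted successors
G = np.vstack([Zx, U[None, :]])               # (N+1) x M regressor matrix
AB = Zy @ np.linalg.pinv(G)                   # solves min ||Zy - [A B] G||_F
A, B = AB[:, :3], AB[:, 3]

# One-step prediction: lift, propagate linearly, read off the first coordinate.
x0, u0 = 0.5, 0.2
x1_pred = (A @ lift(x0) + B * u0)[0]
```

Because the assumed test dynamics happen to be linear in the chosen observables, the one-step prediction of x is exact here; for general dynamics the lifted linear model is only an approximation.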
We address the long-standing problem of computing the region of attraction (ROA) of a target set (typically a neighborhood of an equilibrium point) of a controlled nonlinear system with polynomial dynamics and semialgebraic state and input constraints. We show that the ROA can be computed by solving an infinite-dimensional convex linear programming (LP) problem over the space of measures. In turn, this problem can be solved approximately via a classical converging hierarchy of convex finite-dimensional linear matrix inequalities (LMIs). Our approach is genuinely primal in the sense that convexity of the problem of computing the ROA is an outcome of optimizing directly over system trajectories. The dual infinite-dimensional LP on nonnegative continuous functions (approximated by polynomial sum-of-squares) allows us to generate a hierarchy of semialgebraic outer approximations of the ROA at the price of solving a sequence of LMI problems with asymptotically vanishing conservatism. This sharply contrasts with the existing literature which follows an exclusively dual Lyapunov approach yielding either nonconvex bilinear matrix inequalities or conservative LMI conditions. The approach is simple and readily applicable as the outer approximations are the outcome of a single semidefinite program with no additional data required besides the problem description.
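Schematically, the primal LP over measures takes the following form (a sketch using standard occupation-measure notation; the exact formulation in the paper may differ in details such as the treatment of the control):

```latex
\begin{aligned}
\sup_{\mu_0,\,\hat\mu_0,\,\mu,\,\mu_T \,\ge\, 0} \quad & \mu_0(X) \\
\text{s.t.} \quad
& \int_{X_T} v(T,x)\,d\mu_T
  = \int_{X} v(0,x)\,d\mu_0
  + \int_{[0,T]\times X} \Big(\tfrac{\partial v}{\partial t}
    + \nabla v \cdot f\Big)\,d\mu
  \quad \forall\, v \in C^1([0,T]\times X), \\
& \mu_0 + \hat\mu_0 = \lambda ,
\end{aligned}
```

where λ is the Lebesgue measure on the state constraint set X, μ is an occupation measure, and μ_T is supported on the target set X_T. The first constraint is the Liouville equation in weak form; the slack measure μ̂₀ dominates μ₀ by λ, so the optimal μ₀ is the restriction of λ to the ROA and the optimal value equals its volume. Feasible points of the dual LP on nonnegative functions then yield the outer approximations mentioned above.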
Extended Dynamic Mode Decomposition (EDMD) [27] is an algorithm that approximates the action of the Koopman operator on an N-dimensional subspace of the space of observables by sampling at M points in the state space. Assuming that the samples are drawn either independently or ergodically from some measure µ, it was shown in [11] that, in the limit as M → ∞, the EDMD operator K_{N,M} converges to K_N, the L^2(µ)-orthogonal projection of the action of the Koopman operator onto the finite-dimensional subspace of observables. We show that, as N → ∞, the operator K_N converges in the strong operator topology to the Koopman operator. This in particular implies convergence of the predictions of future values of a given observable over any finite time horizon, a fact important for practical applications such as forecasting, estimation, and control. In addition, we show that accumulation points of the spectra of K_N correspond to eigenvalues of the Koopman operator, with the associated eigenfunctions converging weakly to an eigenfunction of the Koopman operator, provided that the weak limit of the eigenfunctions is nonzero. As a by-product, we propose an analytic version of the EDMD algorithm which, under some assumptions, allows one to construct K_N directly, without sampling. Finally, under additional assumptions, we analyze convergence of K_{N,N} (i.e., M = N), proving convergence, along a subsequence, to weak eigenfunctions (or eigendistributions) related to the eigenmeasures of the Perron-Frobenius operator. No assumption that the observables belong to a finite-dimensional invariant subspace of the Koopman operator is required throughout.
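For a concrete (uncontrolled) illustration of the finite-sample operator K_{N,M}, here is a minimal EDMD sketch; the dictionary and the test map are assumptions, chosen so that the dictionary spans a Koopman-invariant subspace:

```python
import numpy as np

def psi(x):
    """Dictionary of N = 4 observables: the monomials 1, x, x^2, x^3."""
    return np.array([np.ones_like(x), x, x**2, x**3])

f = lambda x: 0.5 * x            # assumed test map; x^k are Koopman eigenfunctions

rng = np.random.default_rng(1)
M = 5000
X = rng.uniform(-1.0, 1.0, M)    # i.i.d. samples from µ = uniform on [-1, 1]
Y = f(X)                         # images under the dynamics

Px, Py = psi(X), psi(Y)          # N x M matrices of evaluated observables
K = Py @ np.linalg.pinv(Px)      # K_{N,M}: solves min ||Py - K Px||_F

# The dictionary is invariant under the Koopman operator of f, so EDMD
# recovers the eigenvalues (1/2)^k, k = 0, ..., 3, up to numerical error.
eigvals = np.sort(np.linalg.eigvals(K).real)
```

For dictionaries that do not span an invariant subspace, K_{N,M} only approximates the projected operator K_N, which is exactly the regime the convergence results above address.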
We characterize the maximum controlled invariant (MCI) set for discrete- as well as continuous-time nonlinear dynamical systems as the solution of an infinite-dimensional linear programming problem. For systems with polynomial dynamics and compact semialgebraic state and control constraints, we describe a hierarchy of finite-dimensional linear matrix inequality (LMI) relaxations whose optimal values converge to the volume of the MCI set; dual to these LMI relaxations are sum-of-squares (SOS) problems providing a converging sequence of outer approximations to the MCI set. The approach is simple and readily applicable in the sense that the approximations are the outcome of a single semidefinite program with no additional input apart from the problem description. A number of numerical examples illustrate the approach.

Introduction. Given a controlled dynamical system described by a differential (continuous-time) or difference (discrete-time) equation, its maximum controlled invariant (MCI) set is the set of all initial states that can be kept within a given constraint set ad infinitum using admissible control inputs. This set goes by many other names in the literature, e.g., the viability kernel in viability theory [5], or the (A, B)-invariant set in the linear case [13]. Set invariance is a ubiquitous and essential concept in dynamical systems theory as far as both analysis and control synthesis are concerned. In particular, by its very definition, the MCI set determines fundamental limitations of a given control system with respect to constraint satisfaction. In addition, there is a very tight link between invariant sets and (control) Lyapunov functions. Indeed, sublevel sets of a Lyapunov function give rise to invariant sets.
Conversely, at least in the linear case, any controlled invariant set gives rise to a control Lyapunov function, and therefore these sets can be readily used to design stabilizing control laws; see, e.g., [9] for a general treatment and, e.g., [17, 26] for applications in model predictive control design. The problem of (maximum) controlled invariant set computation for discrete-time systems has been a topic of active research for more than four decades. The central tool in this effort has been the contractive algorithm of [7] and its expansive counterpart [18]. For an exhaustive survey and historical remarks, see the survey [9] and the book [12].
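As a toy illustration of the contractive iteration mentioned above, S_{k+1} = { x in S_k : there exists an admissible u with f(x, u) in S_k }, started from the constraint set, can be run on a grid. The scalar dynamics, input set, and grid below are illustrative assumptions:

```python
import numpy as np

f = lambda x, u: 2.0 * x + u           # assumed test dynamics x+ = 2x + u
U = np.linspace(-1.0, 1.0, 41)         # admissible inputs |u| <= 1
xs = np.linspace(-2.0, 2.0, 401)       # grid on the constraint set X = [-2, 2]

S = np.ones_like(xs, dtype=bool)       # S_0 = X
for _ in range(50):
    lo, hi = xs[S].min(), xs[S].max()  # current set (an interval here)
    # keep x in S if some admissible input maps it back into S
    S_new = S & np.array([np.any((f(x, U) >= lo) & (f(x, U) <= hi)) for x in xs])
    if np.array_equal(S_new, S):       # fixed point: S approximates the MCI set
        break
    S = S_new

mci_lo, mci_hi = xs[S].min(), xs[S].max()
```

For these dynamics the MCI subset of [-2, 2] is [-1, 1]: any state with |x| <= 1 can be kept there (e.g., x = 1 with u = -1 maps to 1), while any |x| > 1 is eventually pushed out of the constraint set, and the iteration converges to this interval up to the grid resolution.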