Extremum seeking feedback is a powerful method to steer a dynamical system to an extremum of a partially or completely unknown map. Understanding the qualitative behavior of extremum seeking systems, however, often requires advanced system-theoretic tools. In this paper, a novel interpretation of extremum seeking is introduced. We show that the trajectories of an extremum seeking system can be approximated by the trajectories of a system which involves certain Lie brackets of the vector fields of the extremum seeking system. It turns out that the Lie bracket system directly reveals the optimizing behavior of the extremum seeking system. Furthermore, we establish a theoretical foundation and prove that uniform asymptotic stability of the Lie bracket system implies practical uniform asymptotic stability of the corresponding extremum seeking system. We use the established results to prove local and semi-global practical uniform asymptotic stability of the extrema of a certain map for multi-agent extremum seeking systems.
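As a rough numerical illustration of this Lie bracket interpretation (a minimal sketch, not taken from the paper: the scalar map J, the parameter values, and the forward-Euler integration are all my assumptions), the oscillatory system dx/dt = sqrt(alpha*omega)*(J(x)*cos(omega*t) + sin(omega*t)) behaves, for large omega, like its Lie bracket system dx/dt = -(alpha/2)*J'(x), i.e. like gradient descent on J:

```python
import math

def J(x):
    """Cost map, unknown to the controller, with minimizer x* = 1."""
    return (x - 1.0) ** 2

alpha, omega = 1.0, 100.0   # dither gain and frequency
dt, T = 1e-4, 10.0          # Euler step and simulation horizon
x, t = 3.0, 0.0             # start away from the minimizer

for _ in range(int(T / dt)):
    # oscillatory extremum seeking dynamics; only the value J(x) is used
    dx = math.sqrt(alpha * omega) * (J(x) * math.cos(omega * t)
                                     + math.sin(omega * t))
    x += dt * dx
    t += dt

# For large omega, x tracks the Lie bracket (gradient descent) system
# and settles in a small neighborhood of the minimizer x* = 1.
print(x)
```

Increasing omega shrinks the residual oscillation around the minimizer, mirroring the practical (rather than exact) stability statement above.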
In this paper, we describe a broad class of control functions for extremum seeking problems. We show that it unifies and generalizes existing extremum seeking strategies based on Lie bracket approximations, and allows one to design new controls with favorable properties in extremum seeking and vibrational stabilization tasks. The second result of this paper is a novel approach for studying the asymptotic behavior of extremum seeking systems. It provides a constructive procedure for defining the frequencies of the control functions so as to ensure practical asymptotic and exponential stability. In contrast to many known results, we also prove asymptotic and exponential stability in the sense of Lyapunov for the proposed class of extremum seeking systems under appropriate assumptions on the vector fields.

* This work was supported in part by the Alexander von Humboldt Foundation and the Deutsche Forschungsgemeinschaft (EB 425/4-1). Corresponding author: V. Grushkovskaya.
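To make the shape of such a family concrete, here is a hedged sketch in the spirit of Lie-bracket-based designs (the particular generating functions below are illustrative assumptions, not necessarily the class proposed in the paper). For a static map J, one may take controls of the form

```latex
% Oscillatory control with generating functions F_1, F_2 (illustrative):
\dot{x} = \sqrt{\omega}\,\bigl(F_1(J(x))\cos(\omega t) + F_2(J(x))\sin(\omega t)\bigr)
% Corresponding Lie bracket system as \omega \to \infty:
\dot{\bar{x}} = \tfrac{1}{2}\bigl(F_1 F_2' - F_1' F_2\bigr)\bigl(J(\bar{x})\bigr)\,\nabla J(\bar{x})
```

Any pair whose Wronskian-type combination F_1 F_2' - F_1' F_2 is identically equal to some constant -alpha < 0 turns the Lie bracket system into gradient descent on J; for instance, F_1 = sin(J) and F_2 = cos(J) give alpha = 1.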
In this paper, we investigate the use of relaxed logarithmic barrier functions in the context of linear model predictive control. We present results that allow us to guarantee asymptotic stability of the corresponding closed-loop system, and discuss further properties such as performance and constraint satisfaction in dependence on the underlying relaxation. The proposed stabilizing MPC schemes are not necessarily based on an explicit terminal set or state constraint, and they allow one to characterize the stabilizing control input sequence as the minimizer of a globally defined, continuously differentiable, and strongly convex function. The results are illustrated by means of a numerical example.
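As a sketch of the key ingredient (the specific quadratic relaxation below is one common C^1 construction and an assumption on my part, as is the helper name), a relaxed logarithmic barrier agrees with -ln(z) inside the constraint set and continues it smoothly below a threshold, so it is globally defined and penalizes, rather than forbids, constraint violation:

```python
import math

def relaxed_log_barrier(z, delta=0.1):
    """-ln(z) for z >= delta; below delta, a quadratic continuation
    matching value and slope at z = delta, so the function is C^1 and
    finite for all z, including infeasible values z <= 0."""
    if z >= delta:
        return -math.log(z)
    return 0.5 * (((z - 2.0 * delta) / delta) ** 2 - 1.0) - math.log(delta)
```

Because the relaxed barrier is finite everywhere, the resulting MPC cost can be globally defined and continuously differentiable, in line with the characterization above.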
We address the observability problem for ensembles that are described by probability distributions. The problem is to reconstruct a probability distribution of the initial state from the time-evolution of the probability distribution of the output under a classical finite-dimensional linear system. We present two solutions to this problem, one based on formulating the problem as an inverse problem and the other based on reconstructing all the moments of the distribution. The first approach leads us to a connection between the reconstruction problem and mathematical tomography problems. In the second approach, we use the framework of tensor systems to describe the dynamics of the moments, which leads to a more systems-theoretic treatment of the reconstruction problem. Furthermore, we show that both frameworks are inherently related. The appeal of having two dual viewpoints, the first being more geometric and the second more systems-theoretic, is illustrated in several examples of theoretical or practical importance.
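A minimal sketch of the moment-based viewpoint, restricted to the first moment (the system matrices, ensemble size, and sample-mean estimator are my assumptions): the mean of the state distribution obeys the same linear dynamics as the state itself, so the initial mean can be recovered from output means via the classical observability matrix:

```python
import numpy as np

rng = np.random.default_rng(0)
A = np.array([[0.9, 0.2], [-0.1, 0.8]])   # state dynamics x_{k+1} = A x_k
C = np.array([[1.0, 0.0]])                # output y_k = C x_k

# ensemble of initial states with (to-be-reconstructed) mean mu0
mu0 = np.array([2.0, -1.0])
X = mu0 + 0.1 * rng.standard_normal((1000, 2))

# record the mean of the output distribution at each time step
ybar = []
Xk = X.copy()
for _ in range(2):
    ybar.append(float((Xk @ C.T).mean()))
    Xk = Xk @ A.T

# first moments follow the classical dynamics: ybar[k] = C A^k mu0,
# so the initial mean solves a standard observability system
O = np.vstack([C, C @ A])
mu0_hat = np.linalg.solve(O, np.array(ybar))
```

Higher-order moments evolve under induced linear dynamics, which is where the tensor-systems framework mentioned above enters.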
In this article, we consider extremum seeking problems for a general class of nonlinear dynamic control systems. The main result of the article is a broad family of control laws which optimize the steady-state performance of the system. We prove practical asymptotic stability of the optimal steady state and, moreover, propose sufficient conditions for asymptotic stability in the sense of Lyapunov. The results generalize and extend existing results based on Lie bracket approximations. In particular, our approach does not rely on singular perturbation theory, which is commonly used in extremum seeking for nonlinear dynamic systems.
We consider the problem of analyzing and designing gradient-based discrete-time optimization algorithms for a class of unconstrained optimization problems with strongly convex objective functions and Lipschitz continuous gradients. By formulating the problem as a robustness analysis problem and making use of a suitable adaptation of the theory of integral quadratic constraints, we establish a framework that allows us to analyze convergence rates and robustness properties of existing algorithms, and that enables the design of novel robust optimization algorithms with prespecified guarantees, capable of exploiting additional structure in the objective function.
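For this problem class there are classical baseline rates that such a framework recovers and generalizes; as a hedged numerical sketch (the quadratic test function and parameter values are my assumptions, not the paper's IQC machinery), gradient descent with step size 2/(m+L) on an m-strongly convex function with L-Lipschitz gradient contracts the distance to the minimizer by at least (L-m)/(L+m) per step:

```python
import numpy as np

m, L = 1.0, 10.0         # strong convexity and gradient Lipschitz constants
alpha = 2.0 / (m + L)    # classical step size choice
rho = (L - m) / (L + m)  # guaranteed per-step contraction factor

# quadratic objective 0.5 * x^T H x with spectrum inside [m, L]
H = np.diag([m, 4.0, L])

def grad(x):
    return H @ x

x = np.array([1.0, -2.0, 3.0])
for _ in range(50):
    x_next = x - alpha * grad(x)
    # distance to the minimizer (the origin) shrinks by at least rho
    assert np.linalg.norm(x_next) <= rho * np.linalg.norm(x) + 1e-12
    x = x_next
```

The IQC-based framework described above certifies such rates for general (non-quadratic) functions in the class and for richer algorithm structures, and additionally quantifies robustness to gradient perturbations.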