Autonomous optimization refers to the design of feedback controllers that steer a physical system to a steady state that solves a predefined, possibly constrained, optimization problem. As such, no exogenous control inputs such as setpoints or trajectories are required. Instead, these controllers are modeled after optimization algorithms that take the form of dynamical systems. The interconnection of this type of optimization dynamics with a physical system is, however, not guaranteed to be stable unless both dynamics act on sufficiently different timescales. In this paper, we quantify the required timescale separation and give prescriptions that can be directly used in the design of this type of feedback controller. Using ideas from singular perturbation analysis, we derive stability bounds for different feedback optimization schemes that are based on common continuous-time optimization algorithms. In particular, we consider gradient descent and its variations, including projected gradient and Newton flows. We further give stability bounds for momentum methods and saddle-point flows interconnected with dynamical systems. Finally, we discuss how optimization algorithms like subgradient and accelerated gradient descent, while well-behaved in offline settings, are unsuitable for autonomous optimization due to their general lack of robustness.
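The core idea can be illustrated with a toy simulation (all numbers hypothetical, not taken from the paper): a fast, stable scalar plant whose steady-state map is x_ss = u, driven by a slow gradient-flow controller that needs no setpoint — only the gradient of the cost measured at the plant output. The gain `eps` sets the controller's timescale and must be small relative to the plant time constant `a`.

```python
# Hypothetical toy example of autonomous optimization: the plant
# xdot = -a*(x - u) has steady-state map x_ss = u, and the gradient-flow
# controller udot = -eps * grad_phi(x) steers that steady state toward
# the minimizer of phi(x) = 0.5*(x - 3)^2, using forward-Euler simulation.
a = 10.0    # fast plant time constant
eps = 0.5   # slow controller gain (timescale separation: eps << a)
dt = 1e-3
x, u = 0.0, 0.0
grad_phi = lambda x: x - 3.0  # gradient of 0.5*(x - 3)^2

for _ in range(200_000):
    x += dt * (-a * (x - u))        # fast plant dynamics
    u += dt * (-eps * grad_phi(x))  # slow optimization dynamics

print(round(x, 3), round(u, 3))  # both approach the optimizer 3.0
```

Note that the controller never receives the optimizer 3.0 as a setpoint; it only evaluates the cost gradient along the plant trajectory.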
We consider the problem of optimizing the steady state of a dynamical system in closed loop. Conventionally, the design of feedback optimization control laws assumes that the system is stationary. However, in reality, the dynamics of the (slow) iterative optimization routines can interfere with the (fast) system dynamics. We provide a study of the stability and convergence of these feedback optimization setups in closed loop with the underlying plant, via a custom-tailored singular perturbation analysis result. Our study is particularly geared towards applications in power systems and the question of whether recently developed online optimization schemes can be deployed without jeopardizing dynamic system stability.
In this paper, we present a novel control scheme for feedback optimization. That is, we propose a discrete-time controller that can steer a physical plant to the solution of a constrained optimization problem without numerically solving the problem. Our controller can be interpreted as a discretization of a continuous-time projected gradient flow. Compared to other schemes used for feedback optimization, such as saddle-point schemes or inexact penalty methods, our control approach combines several desirable properties: it asymptotically enforces constraints on the plant steady-state outputs, and temporary constraint violations can be easily quantified. Our scheme requires only reduced model information in the form of steady-state input-output sensitivities of the plant. Further, global convergence is guaranteed even for non-convex problems. Finally, our controller is straightforward to tune, since the step size is the only tuning parameter.
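A minimal sketch of this kind of controller (illustrative, not the paper's exact algorithm): a projected-gradient feedback law u⁺ = Proj_U(u - α Hᵀ∇Φ(y)) that uses only the steady-state sensitivity H of the plant and the measured output y, here emulated by the static map y = H u, with box constraints on the input and the step size α as the single tuning knob.

```python
import numpy as np

# Hedged sketch of a projected-gradient feedback controller:
# u+ = Proj_U(u - alpha * H.T @ grad_phi(y)), where H is the plant's
# steady-state input-output sensitivity and y is the measured output.
# All numerical values below are hypothetical.
H = np.array([[1.0, 0.5],
              [0.2, 1.0]])      # steady-state sensitivity dy/du
y_ref = np.array([1.0, 0.5])    # output target in phi(y) = 0.5*||y - y_ref||^2
u_min, u_max = -1.0, 1.0        # input box constraints U
alpha = 0.2                     # step size: the only tuning parameter

u = np.zeros(2)
for _ in range(500):
    y = H @ u                   # measured plant output (plant at steady state)
    grad = y - y_ref            # gradient of 0.5*||y - y_ref||^2
    u = np.clip(u - alpha * H.T @ grad, u_min, u_max)  # projected step

print(np.round(y, 3))           # y approaches y_ref
```

Note that the loop never solves the optimization problem numerically: each iteration is one measurement, one gradient step, and one projection.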
Mathematical optimization is one of the cornerstones of modern engineering research and practice. Yet, throughout all application domains, mathematical optimization is, for the most part, considered to be a numerical discipline. Optimization problems are formulated to be solved numerically with specific algorithms running on microprocessors. An emerging alternative is to view optimization algorithms as dynamical systems. While this new perspective is insightful in itself, liberating optimization methods from specific numerical and algorithmic aspects opens up new possibilities to endow complex real-world systems with sophisticated self-optimizing behavior. Towards this goal, it is necessary to understand how numerical optimization algorithms can be converted into feedback controllers to enable robust "closed-loop optimization". In this article, we review several research streams that have been pursued in this direction, including extremum seeking and pertinent methods from model predictive and process control. However, our primary focus lies on recent methods under the name of "feedback-based optimization". This research stream studies control designs that directly implement optimization algorithms in closed loop with physical systems. Such ideas are finding widespread application in the design and retrofit of control protocols for communication networks and electricity grids. In addition to an overview of continuous-time dynamical systems for optimization, our particular emphasis in this survey lies on closed-loop stability as well as the enforcement of physical and operational constraints in closed-loop implementations. We further illustrate these methods in the context of classical problems, namely congestion control in communication networks and optimal frequency control in electricity grids, and we highlight one potential future application in the form of autonomous reserve dispatch in power systems.
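The congestion-control application mentioned above can be sketched in a few lines (a Kelly-style network-utility-maximization toy, with illustrative numbers not taken from the survey): two log-utility sources share one link of capacity c, and a dual-ascent "price" update turns the aggregate rate measurement into a feedback law.

```python
# Hedged sketch of dual-decomposition congestion control (Kelly-style NUM):
# two sources maximize log(x_i) - lam*x_i given the link price lam, and the
# link updates its price from the measured aggregate rate. Illustrative only.
c = 2.0       # link capacity shared by the two sources
lam = 1.0     # link "price" (dual variable)
gamma = 0.1   # price update step size

for _ in range(2000):
    x = [1.0 / lam, 1.0 / lam]  # each source's best response for log utility
    lam = max(1e-6, lam + gamma * (sum(x) - c))  # dual ascent, kept positive

print(round(x[0], 3), round(lam, 3))  # rates 1.0 each, price 1.0
```

At equilibrium the aggregate rate exactly fills the link (2/λ = c), which here gives λ = 1 and per-source rate 1.0 — the sources never see the capacity constraint directly, only the price.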