We consider the problem of optimizing the steady state of a dynamical system in closed loop. Conventionally, the design of feedback optimization control laws assumes that the system is stationary. In reality, however, the dynamics of the (slow) iterative optimization routines can interfere with the (fast) system dynamics. We study the stability and convergence of these feedback optimization setups in closed loop with the underlying plant, via a custom-tailored singular perturbation analysis result. Our study is particularly geared towards applications in power systems and the question of whether recently developed online optimization schemes can be deployed without jeopardizing dynamic system stability.
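As a rough sketch of the timescale separation involved (using our own generic notation, not necessarily the paper's), the closed loop can be viewed as a fast plant driven by a slow, gradient-based feedback optimizer:

\dot{x} = f(x, u), \qquad y = h(x), \qquad \dot{u} = -\varepsilon \, \nabla_u \Phi(u, y),

where x is the plant state, y the measured output, u the input updated by the optimization routine, \Phi the steady-state objective, and \varepsilon > 0 a small gain; a singular perturbation analysis of this interconnection makes precise when the slow optimization update does not destabilize the fast plant dynamics.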
The main challenge in controlling hybrid systems arises from having to consider an exponential number of sequences of future modes to make good long-term decisions. Model predictive control (MPC) computes a control action through a finite-horizon optimisation problem. A key ingredient in this problem is a terminal cost, which accounts for the system's evolution beyond the chosen horizon. A good terminal cost can reduce the horizon length required for good control action and is often tuned empirically by observing performance. We build on the idea of using N-step Q-functions (Q^(N)) in the MPC objective to avoid having to choose a terminal cost. We present a formulation incorporating the system dynamics and constraints to approximate the optimal Q^(N)-function, together with algorithms to train the approximation parameters through an exploration of the state space. We test the control policy derived from the trained approximations on two benchmark problems through simulations and observe that our algorithms are able to learn good Q^(N)-approximations for high-dimensional hybrid systems from a relatively small dataset. Finally, we compare our controller's performance against that of Hybrid MPC in terms of computation time and closed-loop cost.
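To make the role of the N-step Q-function concrete, a generic form (with placeholder stage cost \ell, dynamics f, and optimal value function V^*; the paper's exact formulation may differ) is

Q^{(N)}(x_0, u_0, \dots, u_{N-1}) = \sum_{k=0}^{N-1} \ell(x_k, u_k) + V^{*}(x_N), \qquad x_{k+1} = f(x_k, u_k),

so that minimizing a learned approximation of Q^{(N)} over the N-step input sequence implicitly accounts for the cost-to-go beyond the horizon and removes the need to hand-tune a terminal cost.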