We study a class of optimization problems in which the objective function is given by the sum of a differentiable but possibly nonconvex component and a nondifferentiable convex regularization term. We introduce an auxiliary variable to separate the objective function components and utilize the Moreau envelope of the regularization term to derive the proximal augmented Lagrangian: a continuously differentiable function obtained by restricting the augmented Lagrangian to the manifold that corresponds to explicit minimization over the variable in the nonsmooth term. The continuous differentiability of this function with respect to both primal and dual variables allows us to leverage the method of multipliers (MM) to compute optimal primal-dual pairs by solving a sequence of differentiable problems. The MM algorithm is applicable to a broader class of problems than proximal gradient methods, has stronger convergence guarantees, and admits more refined step-size update rules than the alternating direction method of multipliers. These features make it an attractive option for solving structured optimal control problems. We also develop an algorithm based on the primal-descent dual-ascent gradient method and prove global (exponential) asymptotic stability when the differentiable component of the objective function is (strongly) convex and the regularization term is convex. Finally, we identify classes of problems for which the primal-dual gradient flow dynamics are convenient for distributed implementation and compare/contrast our framework with existing approaches.

I. INTRODUCTION

We study a class of composite optimization problems in which the objective function is a sum of a differentiable but possibly nonconvex component and a convex nondifferentiable component. Problems of this form are encountered in diverse fields including compressive sensing [1], machine learning [2], statistics [3], image processing [4], and control [5].
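As a concrete illustration of the Moreau envelope construction that this abstract builds on, the sketch below (our own illustration, not the paper's code) evaluates the envelope of the ℓ1 norm through its proximal operator, which is soft-thresholding. The regularizer g = ‖·‖₁ and the smoothing parameter mu are assumptions made for the example; the envelope is a smooth lower bound on g, which is what makes the proximal augmented Lagrangian continuously differentiable.

```python
import numpy as np

def prox_l1(v, mu):
    # Proximal operator of mu*||.||_1: componentwise soft-thresholding.
    return np.sign(v) * np.maximum(np.abs(v) - mu, 0.0)

def moreau_envelope_l1(v, mu):
    # Moreau envelope M_{mu g}(v) = min_x g(x) + ||x - v||^2 / (2 mu)
    # for g = ||.||_1; the minimizer is prox_l1(v, mu).
    x = prox_l1(v, mu)
    return np.sum(np.abs(x)) + np.sum((x - v) ** 2) / (2.0 * mu)
```

For scalar inputs the envelope is the Huber function: quadratic near the origin and linear (offset by mu/2) far from it, which is easy to check numerically.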
In feedback synthesis, such problems typically arise when a traditional performance metric (such as the H2 or H∞ norm) is augmented with a regularization function to promote certain structural properties in the optimal controller. For example, the ℓ1 norm and the nuclear norm are commonly used nonsmooth convex regularizers that encourage sparse and low-rank optimal solutions, respectively. The lack of a differentiable objective function precludes the use of standard descent methods for smooth optimization. Proximal gradient methods [6] and their accelerated variants [7] generalize gradient descent, but typically require the nonsmooth term to be separable over the optimization variable. Furthermore, standard acceleration techniques are not well-suited for problems with constraint sets that do not admit an easy projection (e.g., closed-loop stability). An alternative approach is to split the smooth and nonsmooth components of the objective function over separate variables that are coupled via an equality constraint. Such a reformulation facilitates the use of the alternating direction method of multi...
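The proximal gradient iteration mentioned above alternates a gradient step on the smooth term with a proximal step on the nonsmooth term. A minimal sketch for the standard ℓ1-regularized least-squares instance (our own toy example; the problem data A, b, weight lam, and step size are assumptions, and the step size must be below 2 over the largest eigenvalue of AᵀA):

```python
import numpy as np

def ista(A, b, lam, step, iters=500):
    # Proximal gradient (ISTA) for min_x 0.5*||A x - b||^2 + lam*||x||_1.
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        grad = A.T @ (A @ x - b)        # gradient of the smooth component
        v = x - step * grad             # forward (gradient) step
        # backward step: prox of step*lam*||.||_1 is soft-thresholding
        x = np.sign(v) * np.maximum(np.abs(v) - step * lam, 0.0)
    return x
```

Note that the prox step acts componentwise, which is exactly the separability requirement the abstract points out: for nonseparable regularizers or hard constraint sets, this update is no longer cheap.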
We consider the problem of optimally selecting a subset of available sensors or actuators in large-scale dynamical systems. By replacing a combinatorial penalty on the number of sensors or actuators with a convex sparsity-promoting term, we cast this problem as a semidefinite program. The solution of the resulting convex optimization problem is used to select sensors (actuators) so that performance degrades gracefully relative to the optimal Kalman filter (linear quadratic regulator) that uses all available sensing (actuating) capabilities. We employ the alternating direction method of multipliers to develop a customized algorithm that is well-suited for large-scale problems. Our algorithm scales better than standard SDP solvers with respect to both the state dimension and the number of available sensors or actuators.

Index Terms: Actuator and sensor selection, alternating direction method of multipliers, convex optimization, semidefinite programming, sparsity-promoting estimation and control.
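To make the variable-splitting idea behind ADMM concrete, here is a generic sketch on an ℓ1-regularized least-squares problem. This is not the paper's customized SDP algorithm; it is a toy instance (problem data A, b, weight lam, and penalty rho are assumptions) showing the three-step structure the customized method also follows: a smooth x-minimization, a proximal z-minimization, and a dual update.

```python
import numpy as np

def admm_lasso(A, b, lam, rho=1.0, iters=200):
    # ADMM for min 0.5*||A x - b||^2 + lam*||z||_1  subject to  x = z.
    n = A.shape[1]
    z = np.zeros(n)
    u = np.zeros(n)                       # scaled dual variable
    Q = np.linalg.inv(A.T @ A + rho * np.eye(n))  # cached factor for x-update
    Atb = A.T @ b
    for _ in range(iters):
        x = Q @ (Atb + rho * (z - u))     # x-minimization (smooth, linear solve)
        v = x + u
        z = np.sign(v) * np.maximum(np.abs(v) - lam / rho, 0.0)  # z-min (prox)
        u = u + x - z                     # dual ascent on the constraint x = z
    return z
```

Caching the matrix inverse (or, better, a Cholesky factorization) outside the loop is what makes each iteration cheap; the customized algorithms in the paper exploit analogous problem structure at the SDP level.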
This review article describes the design of static controllers that achieve an optimal tradeoff between closed-loop performance and controller structure. Our methodology consists of two steps. First, we identify controller structure by incorporating regularization functions into the optimal control problem and, second, we optimize the controller over the identified structure. For large-scale networks of dynamical systems, the desired structural property is captured by limited information exchange between the physical and controller layers, and the regularization term penalizes the number of communication links. Although structured optimal control problems are, in general, nonconvex, we identify classes of convex problems that arise in the design of symmetric systems, undirected consensus and synchronization networks, optimal selection of sensors and actuators, and decentralized control of positive systems. Examples of consensus networks, drug therapy design, sensor selection in flexible wing aircraft, and optimal wide-area control of power systems are provided to demonstrate the effectiveness of the framework.
Several problems in modeling and control of stochastically driven dynamical systems can be cast as regularized semi-definite programs. We examine two such representative problems and show that they can be formulated in a similar manner. The first, in statistical modeling, seeks to reconcile observed statistics by suitably and minimally perturbing prior dynamics. The second seeks to optimally select a subset of available sensors and actuators for control purposes. To address modeling and control of large-scale systems we develop a unified algorithmic framework using proximal methods. Our customized algorithms exploit problem structure and allow handling statistical modeling, as well as sensor and actuator selection, at substantially larger scales than what is amenable to current general-purpose solvers. We establish linear convergence of the proximal gradient algorithm, draw contrast between the proposed proximal algorithms and the alternating direction method of multipliers, and provide examples that illustrate the merits and effectiveness of our framework.

Index Terms: Actuator selection, sensor selection, sparsity-promoting estimation and control, method of multipliers, nonsmooth convex optimization, proximal algorithms, regularization for design, semi-definite programming, structured covariances.

I. INTRODUCTION

Convex optimization has had a tremendous impact on many disciplines, including system identification and control design [1]-[7]. The forefront of research points to broadening the range of applications as well as sharpening the effectiveness of algorithms in terms of speed and scalability. The present paper focuses on two representative control problems, statistical control-oriented modeling and sensor/actuator selection, that are cast as convex programs. A range of modern applications require addressing these problems over increasingly large parameter spaces, placing them outside the reach of standard solvers.
A contribution of the paper is to formulate such problems as regularized semi-definite programs (SDPs) and to develop customized optimization algorithms that scale favorably with size. Modeling is often seen as an inverse problem in which a search over parameter space aims to find a parsimonious representation of data. For example, in the control-oriented modeling of fluid flows, it is of interest to improve upon dynamical equations arising from first principles (e.g., linearized Navier-Stokes equations) in order to accurately replicate observed statistical features that are estimated from data. To this end, a perturbation of the prior model can be seen as a feedback gain that results in dynamical coupling between a suitable subset of parameters [8], [9]. On the flip side, active control of large-scale and distributed systems requires judicious placement of sensors and actuators, which again can be viewed as the selection of a suitable feedback or Kalman gain. In either modeling or control, the selection of such gain matrices must be guided by optimality criteria as well as simplicity (low-rank or sparse architectur...
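In the sensor/actuator selection setting, sparsity is promoted on entire rows of a gain matrix, since zeroing a whole row removes the corresponding device. A minimal sketch of the relevant proximal operator, block (group) soft-thresholding, follows; the gain matrix K and weight mu are hypothetical example inputs, not data from the papers.

```python
import numpy as np

def prox_group_l2(K, mu):
    # Prox of mu * sum_i ||K_i||_2 over the rows K_i of K (block soft-thresholding).
    # Rows whose norm falls below mu are set exactly to zero; in a selection
    # problem, a zero row means the corresponding sensor/actuator is dropped.
    out = np.zeros_like(K)
    for i, row in enumerate(K):
        nrm = np.linalg.norm(row)
        if nrm > mu:
            out[i] = (1.0 - mu / nrm) * row
    return out
```

Unlike the elementwise ℓ1 prox, this operator shrinks each row toward zero along its own direction, preserving the row's pattern while deciding whether to keep the device at all.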