Two well-known approaches to nonlinear control involve the use of control Lyapunov functions (CLFs) and receding horizon control (RHC), also known as model predictive control (MPC). The on-line Euler-Lagrange computation of receding horizon control is naturally viewed in terms of optimal control, whereas researchers in CLF methods have emphasized such notions as inverse optimality. We focus on a CLF variation of Sontag's formula, which also results from a special choice of parameters in the so-called pointwise min-norm formulation. Viewed this way, CLF methods have direct connections with the Hamilton-Jacobi-Bellman formulation of optimal control. A single example is used to illustrate the various limitations of each approach. Finally, we contrast the CLF and receding horizon points of view, arguing that their strengths are complementary and suggestive of new ideas and opportunities for control design. The presentation is tutorial, emphasizing concepts and connections over details and technicalities.
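To make the CLF side concrete, the following is a minimal sketch of Sontag's universal formula for a single-input control-affine system ẋ = f(x) + g(x)u with CLF V. The particular system (ẋ = x³ + u) and CLF (V(x) = x²/2) are hypothetical illustrations chosen for simplicity, not taken from the abstract above.

```python
import math

def sontag(a, b):
    """Sontag's universal formula for a scalar-input control-affine system.

    a = LfV(x), the Lie derivative of the CLF V along the drift f
    b = LgV(x), the Lie derivative of V along the input field g
    When b != 0 the formula gives V-dot = a + b*u = -sqrt(a^2 + b^4) < 0.
    """
    if b == 0.0:
        return 0.0
    return -(a + math.sqrt(a * a + b ** 4)) / b

# Hypothetical example: x' = x^3 + u with CLF V(x) = x^2 / 2,
# so LfV = x^4 and LgV = x.
def closed_loop_vdot(x):
    a, b = x ** 4, x
    u = sontag(a, b)
    return a + b * u  # V-dot along the closed loop

print(closed_loop_vdot(1.5) < 0)  # prints True: V decreases away from the origin
```

Away from the origin the controller makes V strictly decrease, which is the stabilization property the pointwise min-norm and inverse-optimality discussions build on.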
Abstract: Control Lyapunov functions (CLFs) are used in conjunction with receding horizon control (RHC) to develop a new class of receding horizon control schemes. In the process, strong connections between the seemingly disparate approaches are revealed, leading to a unified picture that ties together the notions of pointwise min-norm, receding horizon, and optimal control. This framework is used to develop a CLF-based receding horizon scheme, of which a special case provides an appropriate extension of Sontag's formula. The scheme is first presented as an idealized continuous-time receding horizon control law. The issue of implementation under discrete-time sampling is then discussed as a modification. These schemes are shown to possess a number of desirable theoretical and implementation properties. An example is provided, demonstrating their application to a nonlinear control problem. Finally, stronger connections to both optimal and pointwise min-norm control are proved in the Appendix under more restrictive technical conditions.
Index Terms: Control Lyapunov functions, nonlinear optimal control, predictive control, receding horizon control.
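The receding horizon idea itself can be sketched in a few lines: at each step, solve a finite-horizon optimal control problem from the current state, apply only the first input, and repeat. The scalar linear system, quadratic costs, and terminal weight below are hypothetical placeholders (the scheme in the abstract uses a CLF-derived terminal cost); here a generic quadratic terminal weight and a backward Riccati recursion stand in for the finite-horizon solve.

```python
# Receding horizon control for a hypothetical scalar linear system
# x+ = A*x + B*u with stage cost q*x^2 + r*u^2 and terminal cost pT*x^2.
def riccati_gain(A, B, q, r, pT, N):
    """Backward Riccati recursion over a horizon of length N.
    Returns the first-step feedback gain K0, so that u = -K0 * x."""
    p = pT
    K = 0.0
    for _ in range(N):
        K = (B * p * A) / (r + B * p * B)
        p = q + A * p * A - A * p * B * K
    return K

def receding_horizon(x0, A=1.2, B=1.0, q=1.0, r=0.1, pT=1.0, N=5, steps=20):
    """At each step, re-solve the horizon-N problem from the current state
    and apply only the first input of the optimal sequence."""
    x = x0
    for _ in range(steps):
        K = riccati_gain(A, B, q, r, pT, N)
        u = -K * x
        x = A * x + B * u
    return x

print(abs(receding_horizon(10.0)) < 1e-3)  # prints True: open-loop unstable A=1.2 is stabilized
```

Even with a short horizon (N = 5) and an arbitrary positive terminal weight, the receding horizon loop stabilizes this unstable plant; the stability analyses discussed in these abstracts address when such a guarantee holds in general.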
We present a new approach to the stability analysis of finite receding horizon control applied to constrained linear systems. By relating the final predicted state to the current state through a bound on the terminal cost, it is shown that knowledge of upper and lower bounds for the finite horizon costs is sufficient to determine the stability of a receding horizon controller. This analysis is valid for receding horizon schemes with arbitrary positive-definite terminal weights, and does not rely on the use of stabilizing constraints. The result is a computable test for stability, and two simple examples are used to illustrate its application.