Abstract: In this article we develop a general theory of exact parametric penalty functions for constrained optimization problems. The main advantage of the method of parametric penalty functions is the fact that a parametric penalty function can be both smooth and exact, unlike the standard (i.e. non-parametric) exact penalty functions, which are always nonsmooth. We obtain several necessary and/or sufficient conditions for the exactness of parametric penalty functions, and for the zero duality gap property to hold true…
“…If g is Gâteaux differentiable at x, then […], where g′(x) is the Gâteaux derivative of g at x. Finally, if g is merely directionally differentiable at x, then […]. The following theorem, which is a particular case of theorem 3.6 in Reference, contains simple sufficient conditions for the global exactness of the penalty function Φ_λ(x). For any δ > 0, define Ω_δ = {x ∈ A | φ(x) < δ}.…”
Section: Exact Penalty Functions in Metric Spaces (mentioning)
confidence: 98%
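The penalised objective Φ_λ(x) = f(x) + λφ(x) and the sublevel set Ω_δ described in the snippet above can be sketched numerically. The following is a minimal illustration with a made-up objective and constraint (all function names and data here are hypothetical, not the paper's examples); it only shows the mechanics of penalisation, not a proof of exactness:

```python
import numpy as np

# Hypothetical problem: minimise f on A = R^2 subject to x1 + x2 = 1.
# The penalty term phi measures constraint violation, and
# Phi_lambda(x) = f(x) + lambda * phi(x) is the penalised objective.

def f(x):
    return x[0]**2 + x[1]**2          # objective (illustrative)

def phi(x):
    return abs(x[0] + x[1] - 1.0)     # penalty term: violation of the constraint

def Phi(x, lam):
    return f(x) + lam * phi(x)        # penalised objective Phi_lambda

def in_Omega_delta(x, delta):
    return phi(x) < delta             # the sublevel set Omega_delta from the text

x_star = np.array([0.5, 0.5])         # constrained minimiser of this toy problem
x_unc = np.array([0.0, 0.0])          # unconstrained minimiser of f (infeasible)

# For lambda large enough, the penalised value at the constrained minimiser
# is no worse than at the infeasible point, consistent with exactness.
print(Phi(x_star, 2.0) <= Phi(x_unc, 2.0))  # True for lambda = 2
```

For this separable quadratic toy problem the l1-type penalty is exact for any λ above the magnitude of the constraint's Lagrange multiplier; the paper's results give conditions of this kind in general metric spaces.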
“…Let us note that the rate of steepest descent of the function g at x is closely connected to the so‐called strong slope |∇g|(x) of g at x. See References for some calculus rules for the strong slope/rate of steepest descent and the ways one can estimate them in various particular cases.…”
Section: Exact Penalty Functions in Metric Spaces (mentioning)
confidence: 99%
“…Remark In the general case, under the assumptions of Theorem , nothing can be said about locally optimal solutions of the penalised problem or inf‐stationary points of Φ_λ on A that do not belong to the set S_λ(c). In order to ensure that the penalty function Φ_λ is completely exact on A (i.e., when c = +∞), one must suppose that the objective function is globally Lipschitz continuous and that there exists a > 0 such that […] for all x ∈ A∖Ω (see section 3.3 in Reference).…”
Section: Exact Penalty Functions in Metric Spaces (mentioning)
confidence: 99%
“…Remark Let us note that the assumptions of Theorem cannot be improved (see theorem 3.13 in Reference). On the other hand, the global exactness of the penalty function Φ_λ can be proved under weaker assumptions on the penalty term φ.…”
Section: Exact Penalty Functions in Metric Spaces (mentioning)
confidence: 99%
“…In particular, one can apply such popular and efficient modern methods of nonsmooth optimisation as bundle methods, gradient sampling methods, nonsmooth quasi‐Newton methods, and the discrete gradient method (see also the works of Karmitsa et al). Alternatively, one can utilise smoothing approximations of nonsmooth penalty functions as in References, or the smooth penalty function proposed by Huyer and Neumaier.…”
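As a concrete illustration of the smoothing idea mentioned above, here is a generic smoothing of the nonsmooth term |t| by √(t² + μ²). This is a standard textbook construction chosen only to show the principle; it is not the specific smooth penalty of Huyer and Neumaier cited in the text:

```python
import numpy as np

# Smooth the nonsmooth l1 penalty term |t| by sqrt(t^2 + mu^2), which is
# infinitely differentiable for mu > 0 and converges to |t| as mu -> 0.
# (Generic textbook smoothing; parameter names are illustrative.)

def abs_smooth(t, mu):
    return np.sqrt(t * t + mu * mu)

# The approximation error is uniformly bounded by mu, with the worst case at t = 0:
t = np.linspace(-2.0, 2.0, 401)
err = np.max(abs_smooth(t, 1e-3) - np.abs(t))
print(err <= 1e-3 + 1e-9)  # sqrt(t^2 + mu^2) - |t| <= mu for all t
```

Replacing each nonsmooth term of a penalty function with such an approximation yields a smooth problem whose minimisers approach those of the penalised problem as μ → 0, at the cost of losing exactness for any fixed μ > 0.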
Summary
In this two‐part study, we develop a general approach to the design and analysis of exact penalty functions for various optimal control problems, including problems with terminal and state constraints, problems involving differential inclusions, and optimal control problems for linear evolution equations. This approach simplifies an optimal control problem by removing some (or all) of its constraints with the use of an exact penalty function, thus reducing optimal control problems to equivalent variational problems and allowing numerical methods designed for, e.g., problems without state constraints to be applied to problems with such constraints. In the first part of our study, we strengthen some existing results on exact penalty functions for optimisation problems in infinite‐dimensional spaces and utilise them to study exact penalty functions for free‐endpoint optimal control problems, which reduce these problems to equivalent variational ones. We also prove several auxiliary results on integral functionals and Nemytskii operators that are helpful for verifying the assumptions under which the proposed penalty functions are exact.
The second part of our study is devoted to an analysis of the exactness of penalty functions for optimal control problems with terminal and pointwise state constraints. We demonstrate that, with the use of the exact penalty function method, one can reduce fixed‐endpoint problems for linear time‐varying systems and linear evolution equations with convex constraints on the control inputs to completely equivalent free‐endpoint optimal control problems, provided the terminal state belongs to the relative interior of the reachable set. In the nonlinear case, we prove that a local reduction of fixed‐endpoint and variable‐endpoint problems to equivalent free‐endpoint ones is possible under the assumption that the linearised system is completely controllable, and we point out some general properties of nonlinear systems under which a global reduction to equivalent free‐endpoint problems can be achieved. In the case of problems with pointwise state inequality constraints, we prove that such problems for linear time‐varying systems and linear evolution equations with convex state constraints can be reduced to equivalent problems without state constraints, provided one uses the L^∞ penalty term and Slater's condition holds true, while for nonlinear systems a local reduction is possible if a natural constraint qualification is satisfied. Finally, we show that the exact L^p‐penalisation of state constraints with finite p is possible for convex problems if the Lagrange multipliers corresponding to the state constraints belong to L^{p′}, where p′ is the conjugate exponent of p, and for general nonlinear problems if the cost functional does not depend explicitly on the control inputs.
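The distinction between the L^∞ and L^p penalty terms for a pointwise state constraint h(x(t)) ≤ 0 can be illustrated numerically. The trajectory and constraint below are made up purely for illustration (the summary's results concern which choice of penalty term yields an exact penalty function, which this sketch does not address):

```python
import numpy as np

# Compare the L^infinity and L^p penalty terms for a pointwise state
# constraint h(x(t)) <= 0 on [0, 1]. Trajectory and constraint are
# hypothetical examples, not taken from the paper.

t = np.linspace(0.0, 1.0, 1001)
x = np.sin(2 * np.pi * t)                  # a sample "state trajectory"
h = x - 0.5                                 # constraint h(x(t)) = x(t) - 0.5 <= 0
violation = np.maximum(h, 0.0)              # pointwise constraint violation

phi_inf = np.max(violation)                 # L^infinity penalty term (sup norm)
p = 2
phi_p = np.mean(violation**p) ** (1.0 / p)  # Riemann-sum approximation of the
                                            # L^p norm on [0, 1], here p = 2

print(phi_inf >= phi_p)  # True: on [0,1] the L^p norm never exceeds the sup norm
```

Since the interval has measure one, the L^p penalty term is always dominated by the L^∞ term, which is one reason exact L^p‐penalisation with finite p requires the extra multiplier conditions stated above, whereas the L^∞ term needs only Slater's condition in the convex linear case.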