A dynamic programming approach is considered for a class of minimum problems with impulses. The minimization domain consists of trajectories satisfying an ordinary differential equation whose right-hand side depends not only on a measurable control v but also on a second control u and on its time derivative. For this reason, the control u and the differential equation are called impulsive. The value function of the considered minimum problem turns out to depend on the time, the state, the u variable, and the variation allowed to the impulsive control. It is shown that the value function satisfies, in a generalized sense, a dynamic programming equation (DPE), which is obtained from a dynamic programming principle involving space-time trajectories. Moreover, the value function is the unique map solving equation (DPE) that satisfies either an inequality condition or a supersolution condition at each point of the boundary. Incidentally, this extends a result by Barron, Jensen, and Menaldi [Nonlinear Anal., 21 (1993), pp. 241–268], where the impulsive control is scalar monotone and the corresponding vector field is independent of the state variable. Next, a maximum principle is proved, and the well-known relationship between adjoint variables and the value function is suitably extended to impulsive control systems. A fully elaborated example concludes the paper.
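Schematically, the impulsive dynamics described above can be written as a system that is affine in the time derivative of the control u; the symbols f, g_i, and the index m below are illustrative names not fixed by the abstract:

```latex
\dot x(t) \;=\; f\big(t, x(t), u(t), v(t)\big) \;+\; \sum_{i=1}^{m} g_i\big(x(t), u(t)\big)\,\dot u_i(t),
```

where v is the ordinary measurable control and u is the impulsive control; it is the presence of the unbounded derivative \(\dot u\) in the right-hand side that motivates the "impulsive" terminology.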
After recalling the notion of L1 limit solution for a dynamics which is affine in the (unbounded) derivative of the control, we focus on the possible occurrence of the Lavrentiev phenomenon for a related optimal control problem. By this we mean the possibility that the cost functional evaluated along L1 inputs (and the corresponding limit solutions) assumes values strictly smaller than the infimum over AC inputs. In fact, it turns out that no Lavrentiev phenomenon may take place in the unconstrained case, while the presence of an end-point constraint may give rise to an actual gap. We prove that a suitable transversality condition, here called Quick 1-Controllability, is sufficient for this gap to be avoided. Meanwhile, we also investigate the issue of trajectories' approximation through implementation of inputs with bounded variation.
Optimal unbounded control problems with linear growth w.r.t. the control, both in the dynamics and in the cost, may fail to have minimizers in the class of absolutely continuous state trajectories. For this reason, extended versions of such problems have been investigated, in which the domain is extended to include possibly discontinuous state trajectories of bounded variation, and for which existence of minimizers is guaranteed. It is of interest to know whether the passage from the original optimal control problem to its extension introduces an infimum gap. This will reveal whether it is possible to approximate extended minimizers by absolutely continuous state trajectories, as might be required for engineering implementation, and whether numerical schemes might be ill-conditioned. This paper provides sufficient conditions under which there is no infimum gap, expressed in terms of normality of extremals. The link we establish between infimum gaps and normality gives insights into the infimum gap phenomenon. But, perhaps more importantly, it opens up a new approach to devising useful tests for the absence of infimum gaps, namely to supply verifiable sufficient conditions for normality of extremals. We give several examples of the use of this approach, and show that it leads to either new conditions, or improvement of known conditions, for the absence of infimum gaps. We also give a criterion for the absence of infimum gaps which covers some problems where the normality condition is violated, illustrating that sufficient conditions of normality type, while covering many cases, are not necessary.
We consider a control problem where the state must approach asymptotically a target C while paying an integral cost with a non-negative Lagrangian l. The dynamics f is just continuous, and no assumptions are made on the zero level set of the Lagrangian l. Through an inequality involving a positive number \(\bar p_0\) and a Minimum Restraint Function U = U(x) – a special type of Control Lyapunov Function – we provide a condition implying that (i) the system is asymptotically controllable, and (ii) the value function is bounded by \(U/\bar p_0\). The result has significant consequences for the uniqueness issue of the corresponding Hamilton–Jacobi equation. Furthermore, it may be regarded as a first step in the direction of a feedback construction.
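The inequality involving \(\bar p_0\) and the Minimum Restraint Function can be sketched as follows; the Hamiltonian form, the control set A, and the smoothness of U are assumptions made here for illustration, not details taken from the abstract:

```latex
\min_{a \in A} \Big\{ \nabla U(x) \cdot f(x, a) \;+\; \bar p_0\, l(x, a) \Big\} \;<\; 0
\qquad \text{for all } x \text{ outside the target } C,
```

so that U decreases along suitably chosen trajectories at a rate that also dominates the running cost, which is what yields the bound \(U/\bar p_0\) on the value function.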
In this paper we consider an impulsive extension of an optimal control problem with unbounded controls, subject to endpoint and state constraints. We show that the existence of an extended-sense minimizer that is a normal extremal for a constrained Maximum Principle ensures that there is no gap between the infima of the original problem and of its extension. Furthermore, we translate this relation into verifiable sufficient conditions for normality in the form of constraint and endpoint qualifications. Links between the existence of an infimum gap and normality in impulsive control have previously been explored for problems without state constraints. This paper establishes such links in the presence of state constraints and of an additional ordinary control, for locally Lipschitz continuous data.