“…Definition 1 (Tawarmalani and Sahinidis, 2002) For any n ∈ ℕ, any A ⊆ ℝⁿ and any φ : A → ℝ, a function φ′ : conv A → ℝ is called a convex extension of φ iff φ′ is convex and ∀a ∈ A : φ′(a) = φ(a). Moreover, a function φ** : conv A → ℝ is called the tightest convex extension of φ iff φ** is a convex extension of φ and, for every convex extension φ′ of φ and for all a ∈ conv A:…”
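The quoted definition breaks off just before its final inequality. By the meaning of "tightest" (the pointwise-smallest convex extension), the condition it leads up to is that φ** never exceeds any other convex extension:

```latex
\varphi^{**}(a) \;\le\; \varphi'(a).
```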
Regularized empirical risk minimization with constrained labels (in contrast to fixed labels) is a remarkably general abstraction of learning. For common loss and regularization functions, this optimization problem assumes the form of a mixed integer program (MIP) whose objective function is non-convex. In this form, the problem is resistant to standard optimization techniques. We construct MIPs with the same solutions whose objective functions are convex. Specifically, we characterize the tightest convex extension of the objective function, given by the Legendre-Fenchel biconjugate. Computing values of this tightest convex extension is NP-hard. However, by applying our characterization to every function in an additive decomposition of the objective function, we obtain a class of looser convex extensions that can be computed efficiently. For some decompositions and common loss and regularization functions, we derive closed forms.
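As a concrete illustration of the tightest convex extension, consider a function φ on a finite set A ⊂ ℝ. There, the tightest convex extension over conv A is the lower convex envelope of the graph points (a, φ(a)), computable with a monotone-chain hull. The sketch below is ours, not from the paper; it assumes φ actually admits a convex extension, i.e. every point of A is a vertex of the lower hull.

```python
# Illustrative sketch (names are ours): for phi on a finite A ⊂ R, the
# tightest convex extension on conv A is the lower convex envelope of
# the graph points (a, phi(a)), assuming phi admits a convex extension.

def lower_envelope(points):
    """Lower convex hull of 2D points (Andrew's monotone chain)."""
    pts = sorted(points)
    hull = []
    for p in pts:
        # Pop the last vertex while it lies on or above the chord from
        # hull[-2] to p, i.e. while the turn is not strictly convex.
        while len(hull) >= 2:
            (ox, oy), (ax, ay) = hull[-2], hull[-1]
            cross = (ax - ox) * (p[1] - oy) - (ay - oy) * (p[0] - ox)
            if cross <= 0:
                hull.pop()
            else:
                break
        hull.append(p)
    return hull

def envelope_value(hull, x):
    """Evaluate the piecewise-linear envelope at x in conv A."""
    for (x1, y1), (x2, y2) in zip(hull, hull[1:]):
        if x1 <= x <= x2:
            t = (x - x1) / (x2 - x1)
            return (1 - t) * y1 + t * y2
    raise ValueError("x lies outside conv A")

# phi on A = {0, 1, 2}; all three points are hull vertices, so the
# envelope interpolates phi on A and is its tightest convex extension.
hull = lower_envelope([(0.0, 1.0), (1.0, 0.0), (2.0, 1.0)])
print(envelope_value(hull, 0.5))  # 0.5, on the segment from (0,1) to (1,0)
```

If some φ(a) lay strictly above the envelope, the envelope would fail to interpolate φ there and no convex extension would exist at all.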
“…The snag is, of course, that very tight bounds are needed for a successful optimization, which is not the case in the presence of strong nonlinearities. See [43] or [14] for references on general under- and overestimation of functions. When binary variables enter in a nonlinear way into the right-hand-side function f(·), simplifications are often possible.…”
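A standard example of such a simplification (generic textbook material, not taken from the quoted reference): a product z = b·x of a binary variable b ∈ {0, 1} and a continuous variable x ∈ [x_L, x_U] can be replaced exactly by four linear constraints:

```latex
z \le x_U\, b, \qquad
z \ge x_L\, b, \qquad
z \le x - x_L (1 - b), \qquad
z \ge x - x_U (1 - b).
```

For b = 0 the first two constraints force z = 0; for b = 1 the last two force z = x, so the nonlinear product disappears from the model.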
Section: Reformulations To Avoid Nonlinearity
“…However, working with linearizations of nonconvex equations can easily cut off globally optimal points or lead to an infeasible relaxation. For such problems, much effort is spent on finding good convex underestimators of nonconvex functions [2,53], since they allow us to generate a convex relaxation of the problem that can be solved efficiently. To further achieve convergence to a global optimum, convex relaxation-based methods are often embedded in a Branch and Bound framework [1].…”
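A classic convex underestimator of this kind (standard McCormick-envelope material, not taken from the quoted papers) is the lower envelope of a bilinear term w = x·y over a box: the pointwise maximum of two affine functions, hence convex, valid on the whole box, and tight at two opposite corners.

```python
# McCormick lower envelope of the bilinear term x*y over the box
# [x_lo, x_up] x [y_lo, y_up]: the pointwise maximum of two affine
# underestimators. Validity follows from (x - x_lo)*(y - y_lo) >= 0
# and (x_up - x)*(y_up - y) >= 0 on the box.

def mccormick_lower(x, y, x_lo, x_up, y_lo, y_up):
    return max(x_lo * y + y_lo * x - x_lo * y_lo,
               x_up * y + y_up * x - x_up * y_up)

# Check on a grid over [0, 2] x [1, 3]: the envelope never exceeds x*y.
for i in range(21):
    for j in range(21):
        x, y = 2.0 * i / 20, 1.0 + 2.0 * j / 20
        assert mccormick_lower(x, y, 0.0, 2.0, 1.0, 3.0) <= x * y + 1e-9
print(mccormick_lower(1.0, 2.0, 0.0, 2.0, 1.0, 3.0))  # 1.0, below x*y = 2.0
```

The gap at interior points (here 1.0 versus 2.0) is exactly what Branch and Bound shrinks by splitting the box.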
Section: Power Output
“…Note that, to handle a nonlinear objective function h₀(x), one can minimize a new variable y under the additional constraint h₀(x) ≤ y. LaGO requires procedures for the evaluation of function values, gradients, and Hessians. This restriction to "black-box functions" has the advantage that very general functions can be handled, but also the disadvantage that, without insight into the algebraic structure of the functions hᵢ(x), advanced reformulation and box-reduction techniques (as in [1,40,53]) cannot be used, and we are forced to use sampling methods in some components of LaGO.…”
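The epigraph reformulation described in the quote can be checked on a toy example. The brute-force grid search below is illustrative only (LaGO itself uses Branch and Cut, not enumeration); the point is merely that the two formulations share the same optimal value.

```python
# Toy check of the epigraph trick: min h0(x) equals min { y : h0(x) <= y },
# because for any fixed x the best feasible y is exactly h0(x).

def h0(x):
    return (x - 1.5) ** 2 + 0.25

xs = [i / 100.0 for i in range(0, 301)]  # candidate x in [0, 3]
ys = [j / 100.0 for j in range(0, 501)]  # candidate y in [0, 5]

direct = min(h0(x) for x in xs)
epigraph = min(y for x in xs for y in ys if h0(x) <= y)
print(direct, epigraph)  # both 0.25, attained at x = 1.5
```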
Summary. This paper focuses on the optimization of the design and operation of combined heat and power plants (cogeneration plants). Due to the complexity of such an optimization task, conventional optimization methods consider only one operating point, usually the full-load case. However, frequent changes in demand lead to operation in several partial-load conditions. To guarantee a technically feasible and economically sound operation, we present a mathematical programming formulation of a model that accounts for partial-load operation already in the design phase of the plant. This leads to a nonconvex mixed-integer nonlinear program (MINLP), owing to discrete decisions in the design phase and to discrete variables and nonlinear equations describing the thermodynamic state and behavior of the plant. The model is solved using an extended Branch and Cut algorithm implemented in the solver LaGO. We describe conventional optimization approaches and show that, without consideration of different operating points, flexible operation of the plant may be impossible. Further, we address the problem associated with uncertain cost functions for plant components.

Keywords. cogeneration plant, partial-load performance, design optimization, cost minimization, nonconvex mixed-integer nonlinear programming, branch and cut
Introduction

In deregulated energy markets, the optimization of the design and operation of energy conversion plants becomes increasingly important. To reduce product cost over the entire operating life of a plant, both the selection of an optimal plant structure and the selection of optimal operating parameters in different load situations are necessary. Several design optimization methods have been developed and applied to energy conversion systems in the past, e.g.,