Convexification and Global Optimization in Continuous and Mixed-Integer Nonlinear Programming
Tawarmalani and Sahinidis, 2002
DOI: 10.1007/978-1-4757-3532-1

Cited by 527 publications (498 citation statements). References: 1 publication.
“…Definition 1 (Tawarmalani and Sahinidis, 2002). For any n ∈ ℕ, any A ⊆ ℝⁿ, and any φ : A → ℝ, a function φ′ : conv A → ℝ is called a convex extension of φ iff φ′ is convex and ∀a ∈ A : φ′(a) = φ(a). Moreover, a function φ** : conv A → ℝ is called the tightest convex extension of φ iff φ** is a convex extension of φ and, for every convex extension φ′ of φ and for all a ∈ conv A: …”
Section: Tightest Convex Extensions (mentioning)
confidence: 99%
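As an illustration of Definition 1 (this example is ours, not from the cited text, and it assumes the usual reading of the truncated condition: the tightest convex extension dominates every other convex extension pointwise on conv A):

```latex
% Illustrative example (not from the cited source).
% Take phi(x) = x^2 restricted to A = {-1, 0, 1}, so conv A = [-1, 1].
% Both x^2 and |x| are convex and agree with phi on A, so both are
% convex extensions. By convexity, any extension phi' lies below the
% chords through (-1, 1), (0, 0), (1, 1), i.e. phi'(x) <= |x| on
% [-1, 1]; hence |x| is the tightest convex extension.
\[
  \varphi(x) = x^2, \qquad A = \{-1, 0, 1\}, \qquad \operatorname{conv} A = [-1, 1],
\]
\[
  \varphi^{**}(x) = |x| \;\ge\; \varphi'(x)
  \quad \text{for every convex extension } \varphi' \text{ of } \varphi
  \text{ and all } x \in [-1, 1].
\]
```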
“…The snag is, of course, that very tight bounds are needed for a successful optimization, which is not the case in the presence of strong nonlinearities. See [43] or [14] for references on general under- and overestimation of functions. When binary variables enter in a nonlinear way into the right-hand-side function f(·), simplifications are often possible.…”
Section: Reformulations to Avoid Nonlinearity (mentioning)
confidence: 99%
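The under- and overestimation this excerpt points to can be made concrete with the classical McCormick envelopes for a bilinear term w = xy over a box. The sketch below is our illustration, not reproduced from the cited references [43] or [14], and the function name is hypothetical:

```python
# A minimal sketch of McCormick's under-/overestimators for a
# bilinear term w = x*y on a box, one standard instance of the
# general function under- and overestimation referenced above.

def mccormick_bounds(x, y, xL, xU, yL, yU):
    """Return (lower, upper) envelope values of x*y at the point
    (x, y) over the box [xL, xU] x [yL, yU]."""
    under = max(xL * y + x * yL - xL * yL,
                xU * y + x * yU - xU * yU)   # convex underestimators
    over = min(xU * y + x * yL - xU * yL,
               xL * y + x * yU - xL * yU)    # concave overestimators
    return under, over

if __name__ == "__main__":
    lo, hi = mccormick_bounds(0.3, -0.7, -1.0, 1.0, -1.0, 1.0)
    assert lo <= 0.3 * (-0.7) <= hi          # bounds sandwich x*y
    print(f"McCormick bounds at (0.3, -0.7): [{lo:.3f}, {hi:.3f}]")
```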
“…However, working with linearizations of nonconvex equations can easily cut off globally optimal points or lead to an infeasible relaxation. For such problems, much effort is spent on finding good convex underestimators of nonconvex functions [2,53], since they allow us to generate a convex relaxation of the problem that can be solved efficiently. To further achieve convergence to a global optimum, convex relaxation-based methods are often embedded in a branch-and-bound framework [1].…”
Section: Power Output (mentioning)
confidence: 99%
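A minimal sketch of the pattern this excerpt describes, a convex underestimator supplying lower bounds inside a spatial branch-and-bound. This is our illustration (an αBB-style underestimator on a univariate example), not the method of the cited works [1], [2], or [53]:

```python
# Spatial branch-and-bound sketch: on each subinterval, a convex
# alphaBB-style underestimator L(x) = f(x) + alpha*(a - x)*(b - x)
# gives a lower bound; evaluating f at the relaxation minimizer
# gives an upper bound; nodes whose lower bound exceeds the
# incumbent are pruned.

def f(x):                        # example nonconvex objective
    return x**4 - 14*x**2 + 24*x

def fpp_min(a, b):               # lower bound on f'' = 12x^2 - 28 over [a, b]
    m = 0.0 if a <= 0.0 <= b else min(abs(a), abs(b))
    return 12*m*m - 28

def underestimator_min(a, b):
    """Minimize the convex underestimator over [a, b] by ternary search."""
    alpha = max(0.0, -fpp_min(a, b) / 2.0)   # makes L'' = f'' + 2*alpha >= 0
    L = lambda x: f(x) + alpha*(a - x)*(b - x)
    lo, hi = a, b
    for _ in range(100):         # ternary search is valid since L is convex
        m1, m2 = lo + (hi - lo)/3, hi - (hi - lo)/3
        if L(m1) < L(m2): hi = m2
        else:             lo = m1
    x = (lo + hi)/2
    return L(x), x

def branch_and_bound(a, b, tol=1e-6):
    best_val, best_x = f(a), a
    stack = [(a, b)]
    while stack:
        lo, hi = stack.pop()
        lb, xr = underestimator_min(lo, hi)
        if f(xr) < best_val:     # relaxation minimizer is always feasible
            best_val, best_x = f(xr), xr
        if lb < best_val - tol and hi - lo > tol:
            mid = (lo + hi)/2    # branch: split the interval
            stack += [(lo, mid), (mid, hi)]
    return best_x, best_val

if __name__ == "__main__":
    x, v = branch_and_bound(-4.0, 4.0)   # global minimum at x = -3
    print(f"global minimum ~ f({x:.4f}) = {v:.4f}")
```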
“…Note that, to handle a nonlinear objective function h₀(x), one can minimize a new variable y under the additional constraint h₀(x) ≤ y. LaGO requires procedures for the evaluation of function values, gradients, and Hessians. This restriction to "black-box functions" has the advantage that very general functions can be handled, but also the disadvantage that, without insight into the algebraic structure of the functions hᵢ(x), advanced reformulation and box-reduction techniques (as in [1,40,53]) cannot be used, and we are forced to use sampling methods in some components of LaGO.…”
Section: Problem Formulation (mentioning)
confidence: 99%
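A minimal sketch, assuming a generic solver design rather than LaGO's actual API, of the black-box protocol this excerpt describes: each function is visible only through value/gradient/Hessian callbacks, and a sampling fallback stands in where algebraic structure is unavailable. All names here are hypothetical:

```python
# Black-box function protocol sketch (hypothetical, not LaGO's
# interface): the solver sees each h_i only through evaluation
# callbacks, plus a sampling fallback when it cannot inspect h_i
# algebraically.
from dataclasses import dataclass
from typing import Callable

import numpy as np

@dataclass
class BlackBoxFunction:
    value: Callable[[np.ndarray], float]
    gradient: Callable[[np.ndarray], np.ndarray]
    hessian: Callable[[np.ndarray], np.ndarray]

def sample_lower_estimate(h: BlackBoxFunction, lo: np.ndarray,
                          hi: np.ndarray, n: int = 1000) -> float:
    """Estimate min h over the box [lo, hi] by uniform sampling --
    the kind of fallback needed without algebraic structure.
    (Only an estimate: sampling gives no rigorous bound.)"""
    rng = np.random.default_rng(0)
    pts = lo + rng.random((n, lo.size)) * (hi - lo)
    return min(h.value(x) for x in pts)

if __name__ == "__main__":
    h = BlackBoxFunction(                     # example h(x) = x.x - sum sin(x)
        value=lambda x: float(x @ x - np.sin(x).sum()),
        gradient=lambda x: 2*x - np.cos(x),
        hessian=lambda x: 2*np.eye(x.size) + np.diag(np.sin(x)),
    )
    lo, hi = np.full(2, -1.0), np.full(2, 1.0)
    print("gradient at 0:", h.gradient(np.zeros(2)))
    print("sampled lower estimate:", sample_lower_estimate(h, lo, hi))
```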