1996
DOI: 10.1016/0377-2217(94)00200-2
Conditional subgradient optimization — Theory and applications

Cited by 69 publications (51 citation statements). References 26 publications.
“…A (ε-)subgradient of f_X is said to be a conditional (ε-)subgradient of f w.r.t. X [22,24] and can be used instead of g_k to compute the search direction. It is well known that ∂_ε f_X(x) ⊇ ∂_ε f(x) + ∂I_X(x) = ∂_ε f(x) + N_k [18], where N_k = N_X(x_k) is the normal cone of X at x_k; thus, for iterates x_k on the frontier of X (where N_k ≠ {0}), one may have, for a given g_k produced by the "black box", multiple choices of vectors in the normal cone with which to produce a conditional subgradient for computing the direction.…”
Section: Introduction
confidence: 99%
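The citation above describes the key idea: at a boundary iterate, the black-box subgradient g_k may be augmented by any vector from the normal cone N_X(x_k) to form a conditional subgradient. A minimal sketch of one such choice, assuming X is the nonnegative orthant (where the normal cone has the simple closed form N_X(x) = {v ≤ 0 : v_i = 0 if x_i > 0}) and a hypothetical test objective f(x) = ‖x − c‖₁:

```python
import numpy as np

def subgrad(x, c):
    """Subgradient of the test objective f(x) = ||x - c||_1 (an assumption
    for illustration; any convex f with a subgradient oracle would do)."""
    g = np.sign(x - c)
    g[g == 0] = 1.0  # any value in [-1, 1] is a valid subgradient at a kink
    return g

def conditional_subgradient(x, g):
    """Augment g with a vector v from the normal cone N_X(x) for
    X = {x : x >= 0}, i.e. N_X(x) = {v <= 0 : v_i = 0 where x_i > 0}.
    Choosing v_i = -max(g_i, 0) on the active set {i : x_i = 0} zeroes
    the step components that the feasibility projection would undo anyway."""
    v = np.where(x <= 0.0, -np.maximum(g, 0.0), 0.0)
    return g + v

def solve(c, iters=3000):
    """Projected conditional subgradient iteration with diminishing steps."""
    x = np.zeros_like(c)
    best_x, best_f = x.copy(), np.abs(x - c).sum()
    for k in range(1, iters + 1):
        g = conditional_subgradient(x, subgrad(x, c))
        x = np.maximum(x - (1.0 / k) * g, 0.0)  # project back onto X
        fx = np.abs(x - c).sum()
        if fx < best_f:
            best_x, best_f = x.copy(), fx
    return best_x
```

For c = (2, −1, 0.5) the minimizer over the orthant is (2, 0, 0.5), and the best iterate approaches it; the normal-cone augmentation keeps the active coordinate with c_i < 0 fixed at zero instead of repeatedly stepping infeasible and projecting back.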
“…For a discussion of subgradient optimization strategies, see Sarin and Karwan (1987), Sherali and Myers (1988), and Baker and Sheasby (1999). The use of conditional subgradient optimization is analyzed in Larsson et al. (1996). The choice of the step size in subgradient optimization algorithms is addressed by Myers and by Poljak (1967, 1969), Held et al. (1974), and Bazaraa and Sherali (1981).…”
Section: A Brief Review Of Lagrangean Relaxation
confidence: 99%
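The step-size rules attributed above to Poljak and to Held et al. are commonly summarized by the Polyak-type formula s_k = λ(f(x_k) − f̄)/‖g_k‖² with 0 < λ ≤ 2, where f̄ estimates the optimal value. A minimal sketch (the estimate f_star and the one-dimensional test function are assumptions for illustration):

```python
import numpy as np

def polyak_step(f_x, f_star, g, lam=1.0):
    """Polyak-type step: s = lam * (f(x) - f_star) / ||g||^2, 0 < lam <= 2.
    f_star is an estimate of the optimal value (an assumption; in the
    Lagrangean relaxation setting a primal bound typically plays this role)."""
    return lam * (f_x - f_star) / max(float(np.dot(g, g)), 1e-12)

# One step on the test function f(x) = |x|, whose optimal value is 0:
x = 5.0
g = 1.0                                  # subgradient of |x| at x = 5
x = x - polyak_step(abs(x), 0.0, g) * g  # lands at the minimizer here
```

With an exact f_star and lam = 1 a single step reaches the minimizer of this piecewise-linear example; with an estimated f_star the rule is applied iteratively, often shrinking lam when the bound stalls.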
“…There are many other subgradient methods in the literature [4,5,6,23,24,26]. They increase the local computation time by computing descent directions [6], by combining subgradients of previous iterations [4,5], or by performing projections onto general convex sets [23,24,26].…”
Section: Introduction
confidence: 99%
“…They increase the local computation time by computing descent directions [6], by combining subgradients of previous iterations [4,5], or by performing projections onto general convex sets [23,24,26]. Experimental results with some of these methods show improved performance over the subgradient method [23,26], but the subgradient method remains the most widely used approach in the Lagrangean relaxation context.…”
Section: Introduction
confidence: 99%
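One family mentioned above, combining subgradients of previous iterations [4,5], can be sketched with a deflection rule in the spirit of Camerini–Fratta–Maffioli: when the new subgradient forms an obtuse angle with the previous search direction, a multiple of that direction is added back to damp zig-zagging. The variant below is one common form, not necessarily the exact rule of the cited papers:

```python
import numpy as np

def deflected_direction(g, d_prev, gamma=1.5):
    """Deflection in the spirit of Camerini-Fratta-Maffioli (0 < gamma <= 2):
    if the new subgradient g forms an obtuse angle with the previous
    direction d_prev, add back a multiple of d_prev so the new direction
    no longer points against the previous one."""
    dot = float(np.dot(g, d_prev))
    if dot < 0.0:
        beta = gamma * (-dot) / max(float(np.dot(d_prev, d_prev)), 1e-12)
    else:
        beta = 0.0
    return g + beta * d_prev

# A zig-zag step: g exactly reverses d_prev, so the result is a damped
# direction rather than a full reversal
g = np.array([1.0, 0.0])
d_prev = np.array([-1.0, 0.0])
d = deflected_direction(g, d_prev)
```

The extra work per iteration is a few inner products, which is the "increase in local computation time" the citation refers to, traded against fewer oscillating iterations.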