Convergence Rate of Incremental Subgradient Algorithms (2001)
DOI: 10.1007/978-1-4757-6594-6_11

Cited by 162 publications (213 citation statements: 7 supporting, 206 mentioning, 0 contrasting). References: 16 publications.

Citation statements (ordered by relevance):
“…However, this unfortunately leads to the same slow O(1/√k) and O(1/k) convergence rates of the sub-gradient method. But interestingly, if we still use a constant step-size at each iteration for the stochastic gradient method, the algorithm is known to quickly reduce the initial error, even if it has a nonvanishing optimization error [24]. We have observed this for the stochastic gradient descent example in Figure 1.…”
Section: Algorithm
confidence: 80%
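The constant-versus-diminishing step-size behavior described in this excerpt is easy to reproduce numerically. The sketch below is purely illustrative and not code from the cited paper: the least-squares data, the step-size values, and the helper name run_sgd are all assumptions chosen for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative least-squares problem: f(w) = (1/n) * sum_i (a_i^T w - b_i)^2
n, d = 200, 10
A = rng.normal(size=(n, d))
A /= np.linalg.norm(A, axis=1, keepdims=True)   # unit-norm rows keep both step rules stable
w_star = rng.normal(size=d)
b = A @ w_star + 0.1 * rng.normal(size=n)

def run_sgd(step_fn, iters=5000):
    """Stochastic gradient descent; step_fn(t) returns the step-size at iteration t."""
    w = np.zeros(d)
    errors = []
    for t in range(iters):
        i = rng.integers(n)                       # sample one component at random
        grad_i = 2.0 * (A[i] @ w - b[i]) * A[i]   # gradient of the i-th term
        w -= step_fn(t) * grad_i
        errors.append(np.linalg.norm(w - w_star))
    return errors

const = run_sgd(lambda t: 0.01)                   # constant step-size
dimin = run_sgd(lambda t: 0.1 / np.sqrt(t + 1))   # diminishing step-size

# Typical qualitative picture: the constant step shrinks the initial error quickly
# but levels off at a nonvanishing floor, while the diminishing step keeps improving slowly.
print("constant    err@100 = %.3f  err@5000 = %.3f" % (const[99], const[-1]))
print("diminishing err@100 = %.3f  err@5000 = %.3f" % (dimin[99], dimin[-1]))
```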
“…Similar to the coordinate descent methods, the crucial design problem in the stochastic gradient methods is the selection of the data point j at each iteration. Analogously, we obtain better convergence rates by choosing j uniformly at random rather than cycling through the data [24].…”
Section: Algorithm
confidence: 97%
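As a companion illustration of the design choice this excerpt describes, here is a minimal sketch (again not from the cited papers; the ℓ1 objective, the step-size, and the helper name incremental_subgradient are assumptions) of an incremental subgradient method in which the only difference is whether components are visited in a fixed cyclic order or sampled uniformly at random. On a toy problem the gap between the two orderings can be modest; the randomized order is the one with the stronger expected-rate guarantees cited above.

```python
import numpy as np

rng = np.random.default_rng(1)

# Nondifferentiable sum of convex components: f(w) = sum_i |a_i^T w - b_i|
n, d = 100, 5
A = rng.normal(size=(n, d))
A /= np.linalg.norm(A, axis=1, keepdims=True)
w_star = rng.normal(size=d)
b = A @ w_star

def incremental_subgradient(pick, iters=20000, step=0.002):
    """Process one component per iteration; pick(t) chooses which component."""
    w = np.zeros(d)
    for t in range(iters):
        i = pick(t)
        g = np.sign(A[i] @ w - b[i]) * A[i]   # a subgradient of |a_i^T w - b_i|
        w -= step * g
    return np.linalg.norm(w - w_star)

# Identical method and step-size; only the order in which components are processed differs.
print("cyclic order:", incremental_subgradient(lambda t: t % n))
print("random order:", incremental_subgradient(lambda t: rng.integers(n)))
```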
“…Without the presence of noise, the incremental method has been studied by Kibardin [14], Nedić and Bertsekas [18,19], Nedić, Bertsekas, and Borkar [17] (see also Nedić [20]), Ben-Tal, Margalit, and Nemirovski [1], and Kiwiel [15]. The incremental idea has been extended to min-max problems through the use of bundle methods by Gaudioso, Giallombardo, and Miglionico [11].…”
Section: Implications for Incremental K-Subgradient Methods
confidence: 99%
“…However, it has a relatively weak convergence rate of O(√ε) [109], i.e., reducing the distance between w_cur and w* by a factor of ε can require O(1/ε²) iterations. Given that each iteration requires multiple costly steps of loss-augmented prediction, it is natural to look for efficient ways to optimize Equation (6.2).…”
Section: Subgradient Descent Minimization
confidence: 99%
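The O(1/ε²) iteration count quoted above follows from the standard subgradient-method bound. A short derivation, using generic symbols R for the initial distance to an optimum and G for a bound on subgradient norms (neither symbol is taken from the cited text):

```latex
% Subgradient method on a convex f with \|g_t\| \le G and \|w_0 - w^*\| \le R,
% run for T iterations with the constant step \alpha = R/(G\sqrt{T}):
\[
  \min_{1 \le t \le T} f(w_t) - f(w^*)
    \;\le\; \frac{R^2 + \alpha^2 \sum_{t=1}^{T} \|g_t\|^2}{2\alpha T}
    \;\le\; \frac{RG}{\sqrt{T}}.
\]
% Requiring the right-hand side to be at most \varepsilon gives
\[
  T \;\ge\; \frac{R^2 G^2}{\varepsilon^2} \;=\; O\!\left(\frac{1}{\varepsilon^2}\right)
  \quad\text{iterations.}
\]
```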