49th IEEE Conference on Decision and Control (CDC) 2010
DOI: 10.1109/cdc.2010.5717836

Convergence and convergence rate of stochastic gradient search in the case of multiple and non-isolated extrema

Abstract: The asymptotic behavior of stochastic gradient algorithms is studied. Relying on results from differential geometry (the Lojasiewicz gradient inequality), the single limit-point convergence of the algorithm iterates is demonstrated and relatively tight bounds on the convergence rate are derived. In sharp contrast to the existing asymptotic results, the new results presented here allow the objective function to have multiple and non-isolated minima. The new results also offer new insights into the asympto…
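For context, the two standard objects behind these results can be sketched as follows (a minimal sketch; the paper's exact recursion and assumptions may differ). The stochastic gradient iterates follow

    x_{n+1} = x_n - \gamma_n \left( \nabla f(x_n) + \xi_n \right),

with step sizes \gamma_n and noise \xi_n, and the Lojasiewicz gradient inequality asserts that for a real-analytic f and a critical point x^* there exist C > 0, \theta \in [1/2, 1) and a neighborhood of x^* on which

    |f(x) - f(x^*)|^{\theta} \le C \, \| \nabla f(x) \|.

This type of inequality, rather than an isolated-minimum assumption, is what the abstract refers to as the tool driving single limit-point convergence and the rate bounds.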

Cited by 2 publications (1 citation statement)
References 30 publications

Citation statements, ordered by relevance:
“…The fact that $\partial^2_u E(u^*)$ is not positive definite on the space $U$ complicates the asymptotic convergence rate analysis for the gradient descent algorithm of Section 10 (see, for example, [30] and references therein). If the inclusion (92) is an equality, that is,…”
Section: Stepsize Selection
confidence: 99%
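The citing passage above points at a non-positive-definite Hessian at the minimizer as the obstacle to rate analysis. A toy sketch of the effect in Python (illustrative only; the functions and step size below are assumptions, not code from the cited works):

# Toy illustration: gradient descent slows from a geometric rate to a
# polynomial one when the Hessian at the minimizer degenerates.
# f1(x) = x**2 has f1''(0) = 2 > 0 (positive definite Hessian at 0);
# f2(x) = x**4 has f2''(0) = 0     (degenerate Hessian at 0).

def run(grad, x0=1.0, step=0.1, n_iters=10_000):
    """Plain gradient descent: x <- x - step * grad(x)."""
    x = x0
    for _ in range(n_iters):
        x -= step * grad(x)
    return x

x_quad  = run(lambda x: 2.0 * x)      # gradient of x**2
x_quart = run(lambda x: 4.0 * x**3)   # gradient of x**4

print(f"x**2 : |x_n| = {abs(x_quad):.3e}")   # geometric decay toward 0
print(f"x**4 : |x_n| = {abs(x_quart):.3e}")  # only ~ n**(-1/2) decay

With these settings the quadratic iterate underflows to zero, while the quartic iterate is still on the order of 10^{-2} after 10,000 steps: the sublinear behavior that a degenerate Hessian forces and that the convergence rate bounds of the paper are designed to capture.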