2017
DOI: 10.1137/15m1053141

Stochastic Quasi-Newton Methods for Nonconvex Stochastic Optimization

Abstract: In this paper we study stochastic quasi-Newton methods for nonconvex stochastic optimization, where we assume that only stochastic information about the gradients of the objective function is available via a stochastic first-order oracle (SFO). Firstly, we propose a general framework of stochastic quasi-Newton methods for solving nonconvex stochastic optimization. The proposed framework extends the classic quasi-Newton methods from deterministic settings to stochastic settings, and we prove its a…
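The framework the abstract describes pairs a noisy gradient from the SFO with a positive-definite scaling matrix H_k. As a rough illustration only (not the paper's algorithm), here is a minimal Python sketch of one such iteration: a classical BFGS update of H_k driven by stochastic gradients of a toy quadratic. The oracle, stepsize, and curvature skip rule are all illustrative assumptions.

```python
# Minimal sketch of a stochastic quasi-Newton iteration (illustrative only).
import numpy as np

rng = np.random.default_rng(0)

def sfo(x, noise=0.1):
    """Stochastic first-order oracle: gradient of f(x) = 0.5*||x||^2 plus zero-mean noise."""
    return x + noise * rng.standard_normal(x.shape)

def stochastic_quasi_newton(x0, steps=100, alpha=0.1):
    x = x0.astype(float).copy()
    n = x.size
    H = np.eye(n)                      # inverse-Hessian approximation H_k
    g = sfo(x)
    for _ in range(steps):
        x_new = x - alpha * H @ g      # x_{k+1} = x_k - alpha_k * H_k * g_k
        g_new = sfo(x_new)
        s, y = x_new - x, g_new - g    # curvature pair (s_k, y_k)
        if s @ y > 1e-10:              # BFGS update only when curvature is positive
            rho = 1.0 / (s @ y)
            V = np.eye(n) - rho * np.outer(s, y)
            H = V @ H @ V.T + rho * np.outer(s, s)
        x, g = x_new, g_new
    return x

print(np.linalg.norm(stochastic_quasi_newton(np.ones(5))))  # norm shrinks toward 0
```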


Cited by 145 publications (197 citation statements)
References 48 publications
“…However, if we start with a stepsize small enough, on the basis of the aforementioned results, we can expect that the gradient estimates will be descent directions and we will make sufficient progress, leading to convergence. Further justification can be made by comparing our algorithm with a similar approach discussed in [19]. In that paper, the authors prove global convergence of SGO for nonconvex problems while using a cyclic choice of a heuristic stepsize.…”
Section: Theoretical Justifications (mentioning)
confidence: 92%
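For a concrete picture of what a "cyclic choice of a heuristic stepsize" can mean, here is a hypothetical sketch: the stepsize cycles through a fixed finite set rather than following a monotone decay. The values in the set are made up for illustration and are not from [19].

```python
# Hypothetical cyclic stepsize schedule: alpha_k cycles through a fixed set.
from itertools import cycle

stepsizes = cycle([0.1, 0.05, 0.01])  # heuristic cycle; values are assumed
for k, alpha_k in zip(range(6), stepsizes):
    print(f"iteration {k}: alpha_k = {alpha_k}")
```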
“…Recently, several stochastic quasi-Newton algorithms have been developed for large-scale machine learning problems: oLBFGS [25,19], RES [20], SDBFGS [30], SFO [26] and SQN [4]. These methods can be represented in the form of (2.2) by setting v_k, p_k = 0 and using a quasi-Newton approximation for the matrix H_k.…”
Section: Stochastic Quasi-Newton Methods (mentioning)
confidence: 99%
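The update form the quote refers to then reduces to x_{k+1} = x_k - alpha_k * H_k * g_k with g_k a stochastic gradient. Methods in the oLBFGS/SQN family avoid forming H_k explicitly; below is a minimal sketch of the standard L-BFGS two-loop recursion applied to a stochastic gradient. Function and variable names are illustrative, not taken from any of the cited codes.

```python
# Compute H @ g via the L-BFGS two-loop recursion, without forming H.
import numpy as np

def two_loop_direction(g, s_list, y_list):
    """Return H @ g for the inverse-Hessian approximation H implied by the
    stored curvature pairs (s_i, y_i), oldest first in the lists."""
    q = g.copy()
    alphas = []
    for s, y in zip(reversed(s_list), reversed(y_list)):  # newest to oldest
        rho = 1.0 / (y @ s)
        a = rho * (s @ q)
        q -= a * y
        alphas.append((rho, a, s, y))
    if s_list:                                # scale by gamma = s'y / y'y
        s, y = s_list[-1], y_list[-1]
        q *= (s @ y) / (y @ y)
    for rho, a, s, y in reversed(alphas):     # oldest to newest
        b = rho * (y @ q)
        q += (a - b) * s
    return q                                  # approximates H @ g
```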
“…In the deterministic optimization setting, quasi-Newton or Newton methods can achieve higher accuracy and faster convergence by utilizing second-order information [8], [12]. For the stochastic regime, stochastic quasi-Newton (SQN) methods have been extensively studied in [1]-[3], [8]-[13], [16], [54]. In particular, [16] developed a stochastic variable-metric method with subsampled gradients.…”
Section: Introduction (mentioning)
confidence: 99%
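A "subsampled gradient" in the sense attributed to [16] is simply an average of per-sample gradients over a random mini-batch. A minimal sketch, assuming a toy least-squares-style per-sample loss (the data and batch size are illustrative):

```python
# Subsampled (mini-batch) gradient estimate for f(x) = (1/n) * sum_i 0.5*(a_i @ x)^2.
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((1000, 5))   # toy data matrix, n = 1000 samples

def subsampled_grad(x, batch=32):
    """Average the per-sample gradients a_i * (a_i @ x) over a random batch S."""
    S = rng.choice(len(A), size=batch, replace=False)
    return A[S].T @ (A[S] @ x) / batch
```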
“…Moreover, non-convexity and ill-conditioning are two major challenges in stochastic nonconvex optimization problems. To this end, damped BFGS [8] and regularized BFGS [3] have been proposed to deal with the non-convexity and ill-conditioning of the stochastic optimization problem, respectively. In stochastic BFGS methods, the Hessian approximation matrices are ensured to be positive definite in strongly convex optimization problems [14].…”
Section: Introduction (mentioning)
confidence: 99%
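The damping device behind "damped BFGS" is commonly Powell's damping: when the curvature condition s'y > 0 fails, as it can on nonconvex problems, y is blended with B s so the BFGS update stays well defined. A sketch under that assumption; the 0.2 threshold is the conventional choice, not necessarily the exact rule of [8].

```python
# Powell's damped curvature vector: guarantees s @ y_bar >= 0.2 * (s @ B @ s).
import numpy as np

def damped_pair(s, y, B):
    """Return y_bar = theta*y + (1 - theta)*B@s for use in place of y in the BFGS update."""
    sBs = s @ B @ s
    sy = s @ y
    if sy >= 0.2 * sBs:
        theta = 1.0                      # curvature is fine; keep y unchanged
    else:
        theta = 0.8 * sBs / (sBs - sy)   # blend so that s @ y_bar = 0.2 * sBs
    return theta * y + (1.0 - theta) * (B @ s)
```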