2020
DOI: 10.48550/arxiv.2002.09526
Preprint

Stochastic Subspace Cubic Newton Method

Abstract: In this paper, we propose a new randomized second-order optimization algorithm, Stochastic Subspace Cubic Newton (SSCN), for minimizing a high-dimensional convex function f. Our method can be seen both as a stochastic extension of the cubically regularized Newton method of Nesterov and Polyak (2006), and as a second-order enhancement of the stochastic subspace descent of Kozak et al. (2019). We prove that as we vary the minibatch size, the global convergence rate of SSCN interpolates between the rate of stochastic coordinate descent (CD) and the rate of cubic regularized Newton, giving a new connection between first- and second-order methods.
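The page itself carries no code, but the abstract describes the algorithmic idea clearly enough for a rough sketch. The following minimal Python sketch shows one SSCN-style step under our own simplifying assumptions: the sketch matrix selects a random subset of tau coordinates, exact subspace gradients and Hessians are available, and the cubic-regularization constant M is known. The helper names (grad, hess, sscn_step) are ours for illustration, not the paper's.

```python
import numpy as np

def sscn_step(grad, hess, x, tau, M, rng):
    """One SSCN-style step (our sketch, not the paper's reference code):
    minimize the cubically regularized second-order model of f restricted
    to a random tau-dimensional coordinate subspace, then update x."""
    d = x.size
    idx = rng.choice(d, size=tau, replace=False)    # random coordinate sketch S
    g = grad(x)[idx]                                # S^T grad f(x)
    H = hess(x)[np.ix_(idx, idx)]                   # S^T hess f(x) S

    # Subproblem: min_h  g^T h + (1/2) h^T H h + (M/6) ||h||^3.
    # Its stationarity condition is (H + (M/2)||h|| I) h = -g, so we
    # bisect on the scalar r = ||h|| (valid for convex f: H is PSD).
    def h_of(r):
        return np.linalg.solve(H + 0.5 * M * r * np.eye(tau), -g)

    lo, hi = 0.0, 1.0
    while np.linalg.norm(h_of(hi)) > hi:            # bracket the fixed point
        hi *= 2.0
    for _ in range(60):                             # bisection
        mid = 0.5 * (lo + hi)
        if np.linalg.norm(h_of(mid)) > mid:
            lo = mid
        else:
            hi = mid
    h = h_of(hi)

    x_new = x.copy()
    x_new[idx] += h                                 # x + S h for a coordinate sketch
    return x_new
```

As a quick sanity check, one can take a convex quadratic, e.g. grad = lambda x: A @ x - b and hess = lambda x: A with A symmetric positive definite: repeated calls to sscn_step then decrease f(x) = 0.5 * x @ A @ x - b @ x even when tau is much smaller than d.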

Cited by 3 publications (4 citation statements)
References 30 publications
“…According to the estimate (18), in order to get ε-accuracy in function value, it is enough to perform…”
Section: Contracting-domain Newton Methods (mentioning)
Confidence: 99%
“…Later on, accelerated [27], adaptive [6,7] and universal [16,11,17] second-order schemes based on cubic regularization were developed. Randomized versions of Cubic Newton, suitable for solving high-dimensional problems, were proposed in [12,18].…”
Section: Introduction (mentioning)
Confidence: 99%
“…Randomized subspace/projections methods have recently attracted much interest for local or convex optimization problems; see for example, [30,33,38,42,44,57]; no low effective dimensionality assumption is made in these works. Finally, we note that the main step in our convergence analysis consists in deriving a lower bound on the probability that a random subspace of given dimension intersects a given set (the set of approximate global minimizers), which is an important problem in stochastic geometry, see, e.g., the extensive discussion by Oymak and Tropp [46].…”
Section: Introduction (mentioning)
Confidence: 99%
“…Randomized subspace methods have recently attracted much interest for local or convex optimization problems; see for example, [41,36,28,31]; no low effective dimensionality assumption is made in these works. Finally, we note that the main step in our convergence analysis consists in deriving a lower bound on the probability that a random subspace of given dimension intersects a given set (the set of approximate global minimizers), which is an important problem in stochastic geometry, see, e.g., the extensive discussion by Oymak and Tropp [43].…”
Section: Introduction (mentioning)
Confidence: 99%
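Both statements above refer to the same quantity: the probability that a random subspace of a given dimension intersects a fixed target set. As a purely illustrative sketch (our own construction, using a Euclidean ball rather than the set of approximate global minimizers analyzed in the cited works), the snippet below estimates this probability by Monte Carlo, using the fact that a linear subspace meets a ball exactly when the distance from the ball's center to the subspace is at most the radius.

```python
import numpy as np

def subspace_hits_ball(k, center, radius, rng):
    """A uniformly random k-dimensional linear subspace of R^d intersects
    the ball B(center, radius) iff the distance from the ball's center to
    the subspace is at most the radius."""
    d = center.size
    Q, _ = np.linalg.qr(rng.standard_normal((d, k)))    # orthonormal basis of a random subspace
    dist = np.linalg.norm(center - Q @ (Q.T @ center))  # distance from center to span(Q)
    return dist <= radius

rng = np.random.default_rng(0)
d, k = 50, 5
center = np.zeros(d)
center[0] = 1.0                                         # ball sitting at distance 1 from the origin
trials = 2000
hits = sum(subspace_hits_ball(k, center, 0.5, rng) for _ in range(trials))
print(f"estimated intersection probability: {hits / trials:.3f}")
```

The cited analyses need a lower bound on this probability rather than an empirical estimate, which is where the stochastic-geometry machinery discussed by Oymak and Tropp comes in.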