2020
DOI: 10.1007/s10107-020-01505-1
Adaptive regularization with cubics on manifolds

Abstract: Adaptive regularization with cubics (ARC) is an algorithm for unconstrained, nonconvex optimization. Akin to the trust-region method, its iterations can be thought of as approximate, safeguarded Newton steps. For cost functions with Lipschitz continuous Hessian, ARC has optimal iteration complexity, in the sense that it produces an iterate with gradient smaller than ε in O(1/ε^1.5) iterations. For the same price, it can also guarantee a Hessian with smallest eigenvalue larger than −√ε. In this paper, we st…
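For intuition about the method the abstract describes, here is a minimal, hypothetical sketch of one ARC-style iteration on the unit sphere. It is not the authors' code: the cubic-regularized model is minimized approximately by plain gradient descent in the tangent space (a serious implementation would use a Lanczos-based subproblem solver), the candidate step is mapped back with the metric-projection retraction, and the regularization weight sigma is adapted from the ratio of actual to predicted decrease. All names (arc_step, sigma, lr, the 0.1 acceptance threshold) are illustrative choices, not quantities from the paper.

import numpy as np

def arc_step(f, egrad, ehess_vec, x, sigma, inner_iters=50, lr=0.1):
    # One ARC-style iteration on the unit sphere S^{n-1} (illustrative sketch).
    #   f         : cost on R^n, restricted to the sphere
    #   egrad     : Euclidean gradient of f
    #   ehess_vec : Euclidean Hessian-vector product of f
    #   x         : current iterate with ||x|| = 1
    #   sigma     : current cubic regularization weight
    proj = lambda v: v - (x @ v) * x                     # projector onto the tangent space at x
    eg = egrad(x)
    g = proj(eg)                                         # Riemannian gradient
    Hv = lambda s: proj(ehess_vec(x, s) - (x @ eg) * s)  # Riemannian Hessian-vector product on the sphere

    # Approximately minimize the cubic model
    #   m(s) = <g, s> + 0.5 <s, Hv(s)> + (sigma/3) ||s||^3
    # over the tangent space by fixed-step gradient descent.
    s = np.zeros_like(x)
    for _ in range(inner_iters):
        s = s - lr * (g + Hv(s) + sigma * np.linalg.norm(s) * s)

    model_decrease = -(g @ s + 0.5 * (s @ Hv(s)) + (sigma / 3) * np.linalg.norm(s) ** 3)
    y = (x + s) / np.linalg.norm(x + s)                  # retraction: metric projection onto the sphere
    rho = (f(x) - f(y)) / max(model_decrease, 1e-16)     # actual vs. predicted decrease

    if rho > 0.1:                                        # sufficient agreement: accept, relax sigma
        return y, max(sigma / 2, 1e-6)
    return x, 2.0 * sigma                                # poor agreement: reject, increase sigma

# Example: Rayleigh quotient f(x) = x^T A x on the sphere; minimizing it
# drives x toward an eigenvector for the smallest eigenvalue of A.
A = np.diag([3.0, 1.0, -2.0])
f = lambda x: x @ A @ x
egrad = lambda x: 2 * A @ x
ehess_vec = lambda x, s: 2 * A @ s
x, sigma = np.ones(3) / np.sqrt(3.0), 1.0
for _ in range(100):
    x, sigma = arc_step(f, egrad, ehess_vec, x, sigma)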

Cited by 30 publications (36 citation statements)
References 40 publications
“…In fact, this agrees with the typical restricted Lipschitz gradient condition for the pullback f_x(·) = f(x + ·) : F_x → ℝ, employing the obvious retraction; see [3, 17].…”
Section: Assumption 2 (Full Rank) The image Im(A) of the operator A s… (supporting)
confidence: 79%
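For context, the pullback condition this excerpt refers to takes the following general form in the retraction-based setting of the cited ARC-on-manifolds paper (a paraphrase in standard notation, not a verbatim statement from either paper):

\hat{f}_x(s) = f\bigl(R_x(s)\bigr), \quad s \in T_x\mathcal{M},
\qquad
\bigl\| \nabla \hat{f}_x(s) - \nabla \hat{f}_x(0) \bigr\| \le L \, \|s\|.

In the citing paper's linear setting, the domain is the subspace F_x and the retraction is the identity map, so R_x(s) = x + s.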
“…We now state our main complexity result for Algorithm 2.2. The proof follows Theorem 3 in [4], which assumes that the objective function is twice differentiable on M. However, the Riemannian gradient mapping of Ψ is only semismooth. To upper bound ‖grad Ψ(R_{l+1})‖_F, we compare the difference between grad Ψ(R_l) and grad Ψ(R_{l+1}), which lie in different tangent spaces, in the Euclidean sense.…”
Section: 6) (mentioning)
confidence: 94%
“…According to Assumption 1.A, it is easy to verify that Ψ is bounded from below by a finite constant Ψ_low. The lemma below, following from [4], shows that the sum of U_l is bounded above by some constant. The proof can be found in Appendix SM2.7.…”
Section: 6) (mentioning)
confidence: 99%
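The mechanism behind such a bound is the standard ARC telescoping argument: assuming each iteration decreases Ψ by at least a constant multiple c of U_l (U_l and c stand in here for the cited paper's quantities; this is a sketch, not their lemma), summing over iterations and using the lower bound gives

\sum_{l=0}^{N-1} U_l
\le \frac{1}{c} \sum_{l=0}^{N-1} \bigl( \Psi(R_l) - \Psi(R_{l+1}) \bigr)
= \frac{\Psi(R_0) - \Psi(R_N)}{c}
\le \frac{\Psi(R_0) - \Psi_{\mathrm{low}}}{c}.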
“…As for second-order algorithms, the Riemannian trust-region method is a commonly used approach. Recently, [161] proposed an adaptive regularized Newton algorithm on manifolds, [165, 166] extended the cubic regularization method to manifold optimization, and [167] designed a structured quasi-Newton algorithm for problems with orthogonality constraints.…”
unclassified