2022
DOI: 10.1109/tpami.2021.3112139
Improved Variance Reduction Methods for Riemannian Non-Convex Optimization

Cited by 11 publications (11 citation statements)
References 31 publications
“…holds, which violates (23). Here, we highlight that B_xy is the adjoint of B_yx, and therefore, the eigenvalues…”
Section: Key Propositions (mentioning, confidence: 97%)
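The truncated quotation above appeals to a standard consequence of adjointness; as a hedged sketch in the snippet's own notation (the paper's exact claim past the truncation is not visible here, and B_xy, B_yx are taken to be the transport operators the snippet names), adjointness forces the composition to be self-adjoint and positive semidefinite:

```latex
% Adjointness of the operators (snippet's notation):
\langle B_{yx} u,\, v \rangle_y \;=\; \langle u,\, B_{xy} v \rangle_x .
% Hence the composition is self-adjoint and positive semidefinite:
\langle B_{xy} B_{yx} u,\, u \rangle_x \;=\; \| B_{yx} u \|_y^2 \;\ge\; 0 ,
% so the eigenvalues of B_{xy} B_{yx} are real and nonnegative.
```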
“…Let L_0, L_1, L_2 > 0 [78, 33, 23]. A function h : M → R satisfies the PL condition on a Riemannian manifold if for any p ∈ M there exists δ > 0 such that (1/2)‖grad h(p)‖² ≥ δ (h(p) − h*), where h* denotes the minimum of h. The following lemma shows the connection between the smoothness of a function on a manifold and the Lipschitz continuity of its Riemannian gradient, which is fundamental for convergence analysis.…”
Section: Riemannian Geodesic Convex Optimization (mentioning, confidence: 99%)
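The PL condition quoted above is what drives linear convergence rates. As a hedged aside written in Euclidean notation (the Riemannian version replaces ∇h with grad h and line segments with geodesics; this standard argument is not taken from the paper itself), combining PL with L-smoothness gives:

```latex
% Descent lemma for L-smooth h, gradient step with step size 1/L:
h(p_{k+1}) \;\le\; h(p_k) \;-\; \tfrac{1}{2L}\,\|\nabla h(p_k)\|^2 .
% PL condition: \tfrac{1}{2}\,\|\nabla h(p)\|^2 \;\ge\; \delta\,\bigl(h(p) - h^*\bigr).
% Combining the two inequalities:
h(p_{k+1}) - h^* \;\le\; \Bigl(1 - \tfrac{\delta}{L}\Bigr)\,\bigl(h(p_k) - h^*\bigr),
% i.e. the suboptimality gap contracts by a constant factor per step.
```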
“…Furthermore, a majority of the previous studies on SGD in a Riemannian space (see e.g. Tripuraneni et al (2018); Alimisis et al (2020); Han and Gao (2020)) are purely local in nature, because of the assumption that (θ_n)_{n∈N} stays almost surely in a (fixed and deterministic) compact and geodesically convex subset of Θ. For example, note that all the convergence results derived therein depend on the diameter of the compact set in which (θ_n)_{n∈N} is assumed to stay.…”
Section: Introduction (mentioning, confidence: 99%)
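The excerpt above concerns stochastic gradient schemes on a Riemannian manifold. As a minimal, self-contained sketch (not the paper's algorithm; all function names below are our own, and the sphere with metric-projection retraction is chosen purely for illustration), a Riemannian gradient step projects the Euclidean gradient onto the tangent space and then retracts back onto the manifold:

```python
import numpy as np

# Illustrative Riemannian gradient descent on the unit sphere S^{d-1},
# minimizing f(x) = -x^T A x (leading-eigenvector problem).
# Names (project_to_tangent, retract, ...) are hypothetical, not from the paper.

def project_to_tangent(x, g):
    """Project a Euclidean gradient g onto the tangent space at x on the sphere."""
    return g - np.dot(g, x) * x

def retract(x, v):
    """Metric-projection retraction: step in the tangent direction, renormalize."""
    y = x + v
    return y / np.linalg.norm(y)

def riemannian_gd(A, x0, steps=500, lr=0.1):
    x = x0 / np.linalg.norm(x0)
    for _ in range(steps):
        euclid_grad = -2.0 * A @ x                # gradient of f(x) = -x^T A x
        rgrad = project_to_tangent(x, euclid_grad)
        x = retract(x, -lr * rgrad)               # iterate stays on the sphere
    return x

rng = np.random.default_rng(0)
A = np.diag([3.0, 1.0, 0.5])
x = riemannian_gd(A, rng.standard_normal(3))
```

Replacing the full gradient by a stochastic estimate turns this into Riemannian SGD; the locality issue the excerpt raises is that standard analyses assume the iterates stay in a fixed geodesically convex compact set.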