A hybrid stochastic optimization framework for composite nonconvex optimization
2021
DOI: 10.1007/s10107-020-01583-1

Cited by 27 publications (39 citation statements)
References 30 publications
“…where (i) follows from a similar line of arguments in (25). Then (13) follows from using (44) in (43).…”
Section: C2 Proof of Lemma 5(b)
mentioning
confidence: 99%
“…In this paper, we propose GT-HSGD, a novel online variance-reduced method for decentralized non-convex optimization with stochastic first-order oracles (SFO). To achieve fast and robust performance, the GT-HSGD algorithm is built upon global gradient tracking [19,20] and a local hybrid stochastic gradient estimator [42][43][44] that can be considered as a convex combination of the vanilla stochastic gradient returned by the SFO and a SARAH-type variance-reduced stochastic gradient [45]. In the following, we emphasize the key advantages of GT-HSGD compared with the existing decentralized online (variance-reduced) approaches, from both theoretical and practical aspects.…”
Section: Our Contributions
mentioning
confidence: 99%
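The hybrid estimator referenced in this citation statement mixes a plain stochastic gradient with a SARAH-type recursive estimator through a convex combination. Below is a minimal sketch of that idea on an assumed finite-sum toy problem; all names (hybrid_gradient_step, stoch_grad, beta, eta) are illustrative and not taken from either paper.

```python
import numpy as np

def hybrid_gradient_step(x, x_prev, v_prev, beta, eta, stoch_grad, sample):
    """One iterate with a hybrid stochastic gradient estimator (sketch).

    v = beta * g(x) + (1 - beta) * (v_prev + g(x) - g(x_prev)),
    where g is a stochastic gradient evaluated on the same sample at both
    points. beta = 1 recovers vanilla SGD; beta = 0 recovers a SARAH-type
    recursive estimator.
    """
    g_cur = stoch_grad(x, sample)        # stochastic gradient at current point
    g_old = stoch_grad(x_prev, sample)   # same sample, previous point
    v = beta * g_cur + (1.0 - beta) * (v_prev + g_cur - g_old)
    return x - eta * v, v                # plain (smooth, unconstrained) step

# Toy usage: least squares, f_i(x) = 0.5 * (a_i^T x - b_i)^2
rng = np.random.default_rng(0)
A, b = rng.normal(size=(100, 5)), rng.normal(size=100)
stoch_grad = lambda x, i: A[i] * (A[i] @ x - b[i])

x_prev = np.zeros(5)
v = stoch_grad(x_prev, rng.integers(100))   # initialize estimator from one sample
x = x_prev - 0.05 * v
for _ in range(200):
    i = rng.integers(100)
    x_new, v = hybrid_gradient_step(x, x_prev, v, beta=0.1, eta=0.05,
                                    stoch_grad=stoch_grad, sample=i)
    x_prev, x = x, x_new
```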
“…A standard (or vanilla) SGM often converges slowly. Several acceleration techniques have been used to improve its theoretical and/or empirical convergence speed (e.g., [3,15,24,62,65]) for solving convex or smooth nonconvex problems. However, for nonsmooth nonconvex problems, it appears that it is still unknown whether a proximal SGM or a stochastic subgradient method (SsGM) can still have guaranteed convergence if a certain acceleration technique is applied.…”
mentioning
confidence: 99%
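This last passage contrasts vanilla SGM with proximal SGM on composite nonsmooth objectives of the form min_x f(x) + psi(x). As a point of reference only, here is a minimal sketch of a single proximal stochastic gradient step with psi(x) = lam * ||x||_1; the l1 choice and the names (soft_threshold, prox_sgd_step, lam) are assumptions for illustration, not the schemes analyzed in the quoted works.

```python
import numpy as np

def soft_threshold(z, tau):
    """Proximal operator of tau * ||.||_1 (soft-thresholding)."""
    return np.sign(z) * np.maximum(np.abs(z) - tau, 0.0)

def prox_sgd_step(x, eta, lam, stoch_grad, sample):
    """One proximal stochastic gradient step for min_x f(x) + lam * ||x||_1:
    a stochastic gradient step on the smooth part f, followed by the prox
    of the nonsmooth l1 term."""
    g = stoch_grad(x, sample)            # stochastic gradient of f at x
    return soft_threshold(x - eta * g, eta * lam)

# Toy usage: sparse least squares, f_i(x) = 0.5 * (a_i^T x - b_i)^2
rng = np.random.default_rng(1)
A, b = rng.normal(size=(200, 10)), rng.normal(size=200)
stoch_grad = lambda x, i: A[i] * (A[i] @ x - b[i])

x = np.zeros(10)
for _ in range(500):
    x = prox_sgd_step(x, eta=0.01, lam=0.1,
                      stoch_grad=stoch_grad, sample=rng.integers(200))
```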