2016
DOI: 10.48550/arxiv.1605.06892
Preprint

Accelerated Randomized Mirror Descent Algorithms For Composite Non-strongly Convex Optimization

Cited by 11 publications (14 citation statements)
References 20 publications
“…2. Our algorithm also achieves the optimal convergence rate O(1/T^2) for non-strongly convex functions as in [48], [49]. Fig.…”
Section: Equivalent To Its Momentum Accelerated Variant
Citation type: mentioning; confidence: 75%
“…More recently, many acceleration techniques were proposed to further speed up the stochastic variance-reduced methods mentioned above. These techniques mainly include Nesterov's acceleration technique in [25], [39], [40], [41], [42], reducing the number of gradient calculations in early iterations [36], [43], [44], the projection-free property of the conditional gradient method (also known as the Frank-Wolfe algorithm [45]) as in [46], the stochastic sufficient decrease technique [47], and the momentum acceleration tricks in [36], [48], [49]. [40] proposed the accelerating Catalyst framework and achieved an oracle complexity of O((n + √(nL/µ)) log(L/µ) log(1/ε)) for strongly convex problems.…”
Section: Accelerated SGD
Citation type: mentioning; confidence: 99%
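
The excerpt above surveys acceleration techniques layered on top of stochastic variance-reduced methods such as SVRG. As a point of reference, here is a minimal sketch of the plain SVRG estimator those techniques accelerate, written for an unregularized least-squares objective; the function name, step size, and epoch counts are illustrative choices of this report, not values taken from any of the cited papers.

```python
# Minimal SVRG sketch for a smooth finite-sum objective
# f(x) = (1/n) * sum_i 0.5 * (a_i @ x - b_i)**2 (least squares).
import numpy as np

def svrg(A, b, step=0.1, epochs=20, inner=None, rng=None):
    n, d = A.shape
    inner = inner or n
    rng = rng or np.random.default_rng(0)
    x_tilde = np.zeros(d)                      # snapshot point
    for _ in range(epochs):
        # Full gradient at the snapshot point.
        full_grad = A.T @ (A @ x_tilde - b) / n
        x = x_tilde.copy()
        for _ in range(inner):
            i = rng.integers(n)
            # Variance-reduced gradient: grad f_i(x) - grad f_i(x_tilde) + full_grad.
            g = A[i] * (A[i] @ x - b[i]) - A[i] * (A[i] @ x_tilde - b[i]) + full_grad
            x -= step * g
        x_tilde = x                            # new snapshot
    return x_tilde
```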
“…Recently, the momentum or Nesterov [23] technique has been successfully combined with SVRG to achieve faster convergence in real-world applications [4,12,20]. In this section, we propose an accelerated method named ALPC-SVRG, based on Katyusha momentum [4], to obtain both faster running speed and higher communication efficiency.…”
Section: Accelerated Low-Precision Algorithm
Citation type: mentioning; confidence: 99%
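
For intuition on the "Katyusha momentum" mentioned in the excerpt above, the sketch below shows the three-point coupling it adds on top of an SVRG-style estimator, again for a smooth least-squares objective. The coupling weights and step sizes are illustrative placeholders rather than the tuned parameters from Allen-Zhu's Katyusha paper, and composite/proximal terms are omitted.

```python
import numpy as np

def katyusha_style(A, b, L, epochs=20, rng=None):
    """Illustrative Katyusha-style momentum coupling on top of an SVRG estimator."""
    n, d = A.shape
    rng = rng or np.random.default_rng(0)
    tau1, tau2 = 0.4, 0.5              # coupling weights (placeholder values)
    x_tilde = np.zeros(d)              # snapshot point
    y = np.zeros(d)                    # "gradient step" sequence
    z = np.zeros(d)                    # "momentum"/mirror sequence
    for _ in range(epochs):
        full_grad = A.T @ (A @ x_tilde - b) / n
        y_sum = np.zeros(d)
        for _ in range(n):
            # Three-point coupling: query the gradient at a convex combination
            # of the momentum iterate z, the snapshot x_tilde, and y.
            x = tau1 * z + tau2 * x_tilde + (1.0 - tau1 - tau2) * y
            i = rng.integers(n)
            # SVRG-style variance-reduced gradient at x.
            g = A[i] * (A[i] @ x - b[i]) - A[i] * (A[i] @ x_tilde - b[i]) + full_grad
            z = z - g / (3.0 * tau1 * L)   # long momentum step
            y = x - g / (3.0 * L)          # short gradient step
            y_sum += y
        x_tilde = y_sum / n                # new snapshot: average of inner iterates
    return x_tilde
```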
“…When the objective function is smooth, these methods can achieve the optimal O(1/K^2) convergence rate. Recently, Allen-Zhu (2017) and Hien et al. (2016) propose optimal O(1/K^2) algorithms for general convex problems, named Katyusha and ASMD, respectively. For σ-strongly convex problems, Katyusha also achieves the optimal O((n + √(nL/σ)) log(1/ε)) rate.…”
Section: Accelerated Stochastic Gradient Algorithms
Citation type: mentioning; confidence: 99%
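
The O(1/K^2) rate quoted above is the classical optimal rate for smooth convex problems, obtained via Nesterov-type extrapolation. For illustration only, a minimal deterministic accelerated gradient loop is sketched below (FISTA-style extrapolation on least squares); it is not the randomized mirror-descent method of the indexed paper, just the basic acceleration pattern the cited stochastic algorithms emulate.

```python
import numpy as np

def nesterov_accelerated_gd(A, b, L, iters=200):
    """FISTA-style accelerated gradient descent on f(x) = 0.5*||A x - b||^2 / n."""
    n, d = A.shape
    x = np.zeros(d)
    y = x.copy()                               # extrapolated point
    t = 1.0
    for _ in range(iters):
        grad = A.T @ (A @ y - b) / n           # gradient at the extrapolated point
        x_next = y - grad / L                  # gradient step with step size 1/L
        t_next = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
        y = x_next + ((t - 1.0) / t_next) * (x_next - x)   # momentum extrapolation
        x, t = x_next, t_next
    return x
```

Here L is the gradient Lipschitz constant; for this objective it can be taken as np.linalg.norm(A, 2)**2 / n.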