2016 IEEE 55th Conference on Decision and Control (CDC)
DOI: 10.1109/cdc.2016.7798265

Global convergence rate of incremental aggregated gradient methods for nonsmooth problems

Abstract: We focus on the problem of minimizing the sum of smooth component functions (where the sum is strongly convex) and a non-smooth convex function, which arises in regularized empirical risk minimization in machine learning and distributed constrained optimization in wireless sensor networks and smart grids. We consider solving this problem using the proximal incremental aggregated gradient (PIAG) method, which at each iteration moves along an aggregated gradient (formed by incrementally updating gradie…
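The abstract describes the PIAG update: maintain a table of the most recently evaluated component gradients, refresh one entry per iteration, and take a proximal step along their sum. The snippet below is only a minimal sketch of that idea, not the authors' implementation; the function names, the cyclic update order, the constant step size, and the lasso-style test problem are assumptions made here for concreteness.

```python
import numpy as np

def piag(grads, prox_h, x0, step, n_iters):
    """Minimal PIAG-style sketch for minimizing sum_i f_i(x) + h(x).

    grads  : list of callables, grads[i](x) -> gradient of smooth component f_i at x
    prox_h : callable, prox_h(v, t) -> prox_{t*h}(v) for the nonsmooth convex term h
    x0     : starting point (1-D numpy array)
    step   : constant step size (assumed small enough for stability)
    """
    n = len(grads)
    x = x0.astype(float).copy()
    # Table of the most recently computed gradient of each component (entries may be stale).
    table = [g(x) for g in grads]
    agg = np.sum(table, axis=0)            # aggregated gradient: sum of stored component gradients
    for k in range(n_iters):
        i = k % n                          # cyclic component selection (one possible order)
        new_gi = grads[i](x)               # re-evaluate only component i's gradient
        agg += new_gi - table[i]           # update the aggregate incrementally
        table[i] = new_gi
        x = prox_h(x - step * agg, step)   # proximal step along the aggregated gradient
    return x

# Illustrative use on a tiny lasso-type problem: f_i(x) = 0.5*(a_i @ x - b_i)^2, h = lam*||x||_1.
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    A, b, lam = rng.standard_normal((10, 3)), rng.standard_normal(10), 0.1
    grads = [lambda x, a=A[i], bi=b[i]: a * (a @ x - bi) for i in range(10)]
    prox_l1 = lambda v, t: np.sign(v) * np.maximum(np.abs(v) - lam * t, 0.0)
    print(piag(grads, prox_l1, np.zeros(3), step=0.01, n_iters=2000))
```

Only one component gradient is re-evaluated per iteration, so the per-iteration cost is independent of the number of components; the aggregated direction uses the remaining (possibly stale) stored gradients.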

Cited by 5 publications (1 citation statement); references 19 publications.

Citing statement:
“…For example, averaging SA schemes achieve a rate of O(M/√k), where M is an upper bound on the norm of the subgradient (see [21,22]). In the past few years, fast incremental gradient methods with improved rates of convergence have been developed (see [5,9,26,29]). Of these, addressing the merely convex case, SAGA with averaging achieves a sublinear convergence rate of O(N/k), where N denotes the number of blocks, while in the presence of strong convexity, non-averaging variants of SAGA and IAG admit a linear convergence rate assuming that the function satisfies some smoothness conditions.…”
Citation type: mentioning (confidence: 99%)
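For readability, the rates quoted above can be written out in display form. This is only a restatement in the excerpt's own notation (k the iteration counter, M the subgradient-norm bound, N the number of blocks); the particular suboptimality measures and the contraction factor q ∈ (0,1) are typical choices added here for illustration, not taken from the excerpt:

```latex
\begin{align*}
  F(\bar{x}_k) - F^\ast &= O\!\left(\tfrac{M}{\sqrt{k}}\right) && \text{(averaging SA, merely convex)} \\
  F(\bar{x}_k) - F^\ast &= O\!\left(\tfrac{N}{k}\right)        && \text{(SAGA with averaging, merely convex)} \\
  \lVert x_k - x^\ast \rVert^2 &= O\!\left(q^{k}\right), \quad q \in (0,1) && \text{(non-averaging SAGA / IAG, strongly convex)}
\end{align*}
```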