2020
DOI: 10.1109/lcsys.2020.2976311

On the Linear Convergence Rate of the Distributed Block Proximal Method

Abstract: The recently developed Distributed Block Proximal Method, for solving stochastic big-data convex optimization problems, is studied in this paper under the assumption of constant stepsizes and strongly convex (possibly non-smooth) local objective functions. This class of problems arises in many learning and classification tasks in which, for example, strongly convex regularizing functions are included in the objective, the decision variable is extremely high dimensional, and large datasets are employed.
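The abstract does not reproduce the update rule, but the following Python sketch illustrates the general flavor of a block proximal step with a constant stepsize on a strongly convex, non-smooth toy objective. This is an illustrative assumption, not the paper's algorithm: the Distributed Block Proximal Method additionally involves communication among agents in a network, and its exact block selection and update rules are specified in the paper. The toy problem, stepsize, and all names below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy problem: min_x (1/2m)||Ax - b||^2 + (mu/2)||x||^2 + lam*||x||_1.
# The ridge term makes the objective strongly convex; the l1 term is non-smooth.
m, d = 200, 50
A = rng.standard_normal((m, d))
b = rng.standard_normal(m)
mu, lam = 0.1, 0.01

def soft_threshold(v, tau):
    """Proximal operator of tau*||.||_1 (handles the non-smooth term)."""
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def stochastic_grad(x, batch):
    """Stochastic gradient of the smooth part from a sampled mini-batch of rows."""
    Ab = A[batch]
    return Ab.T @ (Ab @ x - b[batch]) / len(batch) + mu * x

x = np.zeros(d)
alpha = 0.05      # constant stepsize, matching the assumption in the abstract
block_size = 5

for k in range(2000):
    batch = rng.choice(m, size=20, replace=False)
    block = rng.choice(d, size=block_size, replace=False)  # random coordinate block
    g = stochastic_grad(x, batch)
    # Proximal-gradient step restricted to the selected block; all other
    # coordinates are left untouched. (In a true big-data setting one would
    # compute only the block of the gradient that is actually used.)
    x[block] = soft_threshold(x[block] - alpha * g[block], alpha * lam)

print("final objective:",
      0.5 * np.mean((A @ x - b) ** 2) + 0.5 * mu * x @ x + lam * np.abs(x).sum())
```

Under strong convexity and a suitably small constant stepsize, iterations of this kind typically converge linearly to a neighborhood of the optimum whose size depends on the stepsize and the gradient noise, which is the regime the paper's title refers to.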

Cited by 0 publications
References 22 publications (29 reference statements)