2018
DOI: 10.1142/s0219530518500082

Faster convergence of a randomized coordinate descent method for linearly constrained optimization problems

Abstract: The problem of minimizing a separable convex function under linearly coupled constraints arises in various application domains such as economic systems, distributed control, and network flow. The main challenge in solving this problem is that the data size is very large, which makes the usual gradient-based methods infeasible. Recently, Necoara, Nesterov, and Glineur [8] proposed an efficient randomized coordinate descent method for this type of optimization problem and presented an appealing convergence …
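As a rough illustration of the setting described in the abstract, the sketch below (not taken from the paper) applies random pairwise coordinate updates to a separable quadratic objective under a single coupling constraint sum_i x_i = c; the pair is updated with opposite-signed steps so that feasibility is preserved at every iteration. The quadratic objective, the single constraint, and all parameter values are assumptions made for this example only.

```python
import numpy as np

# Hypothetical instance: minimize sum_i f_i(x_i) subject to sum_i x_i = c,
# with f_i(x_i) = 0.5 * a_i * x_i**2 - b_i * x_i (separable quadratics),
# so each pairwise subproblem has a closed-form minimizer.
rng = np.random.default_rng(0)
n = 50
a = rng.uniform(1.0, 5.0, size=n)   # curvatures of the f_i
b = rng.normal(size=n)
c = 1.0                             # coupling constraint: sum(x) == c

x = np.full(n, c / n)               # feasible starting point

def grad(i, xi):
    return a[i] * xi - b[i]         # derivative of f_i at xi

for _ in range(20000):
    i, j = rng.choice(n, size=2, replace=False)
    # Move mass between coordinates i and j; x_i + d and x_j - d keep the
    # sum (and hence feasibility) unchanged. d minimizes f_i(x_i+d) + f_j(x_j-d).
    d = -(grad(i, x[i]) - grad(j, x[j])) / (a[i] + a[j])
    x[i] += d
    x[j] -= d

print("constraint residual:", abs(x.sum() - c))
```

Because only two coordinates change per iteration, each step touches only two entries of the data, which is the feature that makes this family of methods attractive when the problem dimension is very large.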

Cited by 5 publications (1 citation statement)
References 29 publications
“…Based on the Stochastic Gradient Descent Method (see e.g. [23, 29, 48]), the integral in (1.4) is replaced by the value $V\big(f_t(x_t) - y_t\big)K_{x_t}(\cdot)$, and the kernel-based online learning algorithm is given by (see e.g. [58])…”
mentioning
confidence: 99%
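For context on the update quoted above, here is a minimal sketch (not from the cited works) of kernel-based online gradient descent, assuming a Gaussian kernel and the squared loss, so that the factor $V(f_t(x_t) - y_t)$ reduces to the residual $f_t(x_t) - y_t$. The kernel width, regularization parameter, step-size schedule, and synthetic data are assumptions for the example.

```python
import numpy as np

def gaussian_kernel(u, v, sigma=0.5):
    # Assumed kernel; any positive-definite kernel could be used instead.
    return np.exp(-np.sum((u - v) ** 2) / (2 * sigma ** 2))

def online_kernel_sgd(stream, eta=0.1, lam=0.01):
    # The hypothesis f_t is stored by its kernel expansion
    # f_t = sum_s alpha_s * K(x_s, .), one term per observed point.
    centers, alphas = [], []
    for t, (x_t, y_t) in enumerate(stream, start=1):
        # Evaluate the current hypothesis at the new point x_t.
        f_xt = sum(a * gaussian_kernel(z, x_t) for z, a in zip(centers, alphas))
        residual = f_xt - y_t
        step = eta / np.sqrt(t)                 # decaying step size (assumed schedule)
        # SGD step in the RKHS: shrink old coefficients (regularization term)
        # and add a new term  -step * residual * K(x_t, .).
        alphas = [(1 - step * lam) * a for a in alphas]
        centers.append(x_t)
        alphas.append(-step * residual)
    return centers, alphas

# Toy usage on a synthetic 1-D regression stream.
rng = np.random.default_rng(0)
xs = rng.uniform(-1, 1, size=200).reshape(-1, 1)
ys = np.sin(3 * xs[:, 0]) + 0.1 * rng.normal(size=200)
centers, alphas = online_kernel_sgd(zip(xs, ys))
```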