2014 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)
DOI: 10.1109/icassp.2014.6854999

Flexible parallel algorithms for big data optimization

Abstract: We propose a decomposition framework for the parallel optimization of the sum of a differentiable function and a (block) separable nonsmooth, convex one. The latter term is typically used to enforce structure in the solution as, for example, in LASSO problems. Our framework is very flexible and includes both fully parallel Jacobi schemes and Gauss-Seidel (Southwell-type) ones, as well as virtually all possibilities in between (e.g., gradient- or Newton-type methods) with only a subset of variables updated at e…
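To make the setting in the abstract concrete, here is a minimal numerical sketch, in Python, of a fully parallel Jacobi-style update applied to the LASSO problem min_x (1/2)||Ax - b||^2 + lam*||x||_1. This is an illustration of the update pattern, not the authors' exact algorithm; the function names, the stepsize rule, and all parameter values are assumptions.

```python
# A minimal sketch (not the paper's exact method) of a fully parallel
# Jacobi update for LASSO: every coordinate's subproblem is solved
# simultaneously, then the iterate moves toward the joint solution.
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of t*||.||_1 (closed form for the l1 term)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def jacobi_lasso(A, b, lam, iters=500, gamma0=0.9, theta=1e-3):
    """Jacobi-style parallel update with a diminishing stepsize (assumed rule)."""
    m, n = A.shape
    x = np.zeros(n)
    L = np.linalg.norm(A, 2) ** 2              # Lipschitz constant of grad f
    gamma = gamma0
    for _ in range(iters):
        grad = A.T @ (A @ x - b)               # gradient of the smooth part
        # All (scalar) blocks solve their subproblems in one parallel pass:
        z = soft_threshold(x - grad / L, lam / L)
        x = x + gamma * (z - x)                # damped move toward joint best response
        gamma = gamma * (1.0 - theta * gamma)  # slowly decaying stepsize
    return x

# Tiny usage example on random data:
rng = np.random.default_rng(0)
A = rng.standard_normal((40, 100))
x_true = np.zeros(100); x_true[:5] = 1.0
b = A @ x_true + 0.01 * rng.standard_normal(40)
x_hat = jacobi_lasso(A, b, lam=0.1)
print("nonzeros recovered:", np.sum(np.abs(x_hat) > 1e-3))
```

Because each coordinate's subproblem has a closed-form solution (soft-thresholding), the whole vector z is computed in one parallel pass; the convex-combination step then damps the joint move so the parallel updates do not overshoot.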

Citation Types: 0 supporting, 54 mentioning, 0 contrasting, 1 unclassified

Year Published: 2014–2021

Cited by 26 publications (55 citation statements); references 31 publications.

“…Other recent studies in the area involve the use of parallel cloud computing (Ekanayake, Qiu, Gunarathne, Beason & Fox, 2010), the creation of domain-specific languages for parallel applications (Janjic et al., 2016), digital image processing (Olmedo, De La Calleja, Benitez, & Medina, 2012), and the optimization of algorithms for Big Data (Facchinei, Sagratella & Scutari, 2014).…”
Section: Computação Paralela (unclassified)
“…In particular, our contributions are as follows: 1) At each time instance, all elements are updated in parallel, and the convergence speed is thus greatly enhanced compared with [7]. As a nontrivial extension of [7] from sequential to parallel updates, and of [10], [11] from deterministic to stochastic optimization problems, we rigorously show that the proposed algorithm almost surely converges.…”
Section: Introduction (mentioning)
confidence: 95%
“…It is tempting to use the parallel algorithm proposed in [10], [11], but it converges for deterministic optimization problems only. Besides, its convergence speed heavily depends on the decay rate of the diminishing stepsize: on the one hand, a slowly decaying stepsize is preferable to make notable progress in each iteration and to achieve satisfactory convergence speed; on the other hand, theoretical convergence is guaranteed only when the stepsize decays fast enough.…”
Section: Introduction (mentioning)
confidence: 99%
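The stepsize trade-off described above can be simulated directly. The sketch below uses the diminishing rule gamma_{t+1} = gamma_t * (1 - theta * gamma_t); the form of the rule is a common choice for such parallel schemes, and the specific theta values are assumptions for illustration.

```python
# A minimal sketch of the stepsize decay trade-off: a slowly decaying
# stepsize keeps making notable progress per iteration, while a fast
# decay is safer for theoretical convergence. Values of theta are assumed.
def stepsize_sequence(gamma0, theta, iters):
    gammas = [gamma0]
    for _ in range(iters - 1):
        gammas.append(gammas[-1] * (1.0 - theta * gammas[-1]))
    return gammas

slow = stepsize_sequence(0.99, 1e-4, 10_000)   # slow decay: fast early progress
fast = stepsize_sequence(0.99, 1e-1, 10_000)   # fast decay: quickly shrinking steps
print(f"after 10k iters: slow-decay gamma = {slow[-1]:.4f}, "
      f"fast-decay gamma = {fast[-1]:.4f}")
```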
“…When the coordinates are updated according to the traditional G-S rule, a few recent works [15]-[17] have proven the O(1/t) rate for the G-S BCD algorithm when applied to certain special convex problems. Some recent works [18, 19] propose BCD-based algorithms with parallel block update rules. These algorithms are…”
Section: F((T-1)) (mentioning)
confidence: 99%
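To make the contrast between the two update rules concrete, here is a minimal sketch, with assumed notation, of a Gauss-Seidel sweep versus a Jacobi step for exact coordinate minimization of a strongly convex quadratic f(x) = (1/2) x^T Q x - c^T x; it is not an implementation of the algorithms in [15]-[19].

```python
# Gauss-Seidel (sequential) vs. Jacobi (parallel) coordinate updates
# for minimizing f(x) = 0.5 * x^T Q x - c^T x with Q symmetric
# positive definite; both use exact per-coordinate minimization.
import numpy as np

def gauss_seidel_step(Q, c, x):
    """Update coordinates one at a time, reusing the freshest values."""
    x = x.copy()
    for i in range(len(x)):
        x[i] = (c[i] - Q[i] @ x + Q[i, i] * x[i]) / Q[i, i]
    return x

def jacobi_step(Q, c, x):
    """Update all coordinates simultaneously from the same old iterate."""
    return x + (c - Q @ x) / np.diag(Q)

# Tiny usage example:
rng = np.random.default_rng(1)
M = rng.standard_normal((5, 5))
Q = M @ M.T + 5 * np.eye(5)        # symmetric positive definite
c = rng.standard_normal(5)
x = np.zeros(5)
for _ in range(50):
    x = gauss_seidel_step(Q, c, x)
print("residual:", np.linalg.norm(Q @ x - c))
```

The Gauss-Seidel sweep folds each new coordinate value into the updates that follow it within the same pass, whereas the Jacobi step computes every coordinate from the same old iterate and is therefore trivially parallelizable, which is the property the parallel block update rules cited above exploit.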