2014
DOI: 10.1109/msp.2014.2329397
Convex Optimization for Big Data: Scalable, randomized, and parallel algorithms for big data analytics

Abstract: This article reviews recent advances in convex optimization algorithms for Big Data, which aim to reduce the computational, storage, and communications bottlenecks. We provide an overview of this emerging field, describe contemporary approximation techniques like first-order methods and randomization for scalability, and survey the important role of parallel and distributed computation. The new Big Data algorithms are based on surprisingly simple principles and attain staggering accelerations even on classical …
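The two scalability ingredients the abstract highlights, first-order methods and randomization, combine naturally in a stochastic proximal gradient method. Below is a minimal Python/NumPy sketch on a synthetic lasso problem; the objective, step size, and batch size are illustrative assumptions, not details taken from the paper.

```python
# Minimal sketch (not from the paper): stochastic proximal gradient for the
# lasso, combining first-order updates (gradients only, no factorizations)
# with randomization (each step touches a small random minibatch of rows
# instead of the full data matrix).
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of t * ||.||_1 (elementwise soft-thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def stoch_prox_grad(A, b, lam, batch=64, iters=2000, step=1e-2, seed=0):
    """Minimize (1/2n)||Ax - b||^2 + lam*||x||_1 with minibatch gradients."""
    rng = np.random.default_rng(seed)
    n, d = A.shape
    x = np.zeros(d)
    for _ in range(iters):
        idx = rng.integers(0, n, size=batch)     # random rows: O(batch*d) work
        Ai, bi = A[idx], b[idx]
        grad = Ai.T @ (Ai @ x - bi) / batch      # unbiased gradient estimate
        x = soft_threshold(x - step * grad, step * lam)
    return x

# Tiny synthetic demo: recover a sparse vector from noisy linear measurements.
rng = np.random.default_rng(1)
A = rng.standard_normal((5000, 200))
x_true = np.zeros(200)
x_true[:10] = 1.0
b = A @ x_true + 0.01 * rng.standard_normal(5000)
x_hat = stoch_prox_grad(A, b, lam=0.1)
print("support recovered:", np.flatnonzero(np.abs(x_hat) > 0.5))
```

Each iteration costs O(batch × d) rather than O(n × d), which is the point of the randomized first-order approach: per-step work is decoupled from the dataset size.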

Cited by 275 publications (261 citation statements)
References 23 publications (61 reference statements)
“…It would be imprudent to claim that we have covered all such methods, even in this narrow subarea of research. We thus hasten to add references to surveys about sublinear-time algorithms [16], streaming algorithms [42], and convex optimization [12]. Even for the methods mentioned here, space limitations did not allow us to go into too much detail, so we focussed on some easily accessible examples.…”
Section: Discussion
Citation type: mentioning (confidence: 99%)
“…The 6 datasets, the generated dataset code, and our source codes are published at the link.⁴ Environment settings: we develop Algorithm 1 in MATLAB with embedded C++ code to compare it with other algorithms. For NNLS, we set system parameters so that MATLAB uses only 1 CPU, and I/O time is excluded; the machine is an 8-core Intel Xeon E5 at 3 GHz.…”
Section: Experimental Evaluation
Citation type: mentioning (confidence: 99%)
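The excerpt benchmarks an NNLS (nonnegative least squares) solver, but the cited paper's Algorithm 1 is not reproduced on this page. As a point of reference only, here is a minimal projected-gradient NNLS baseline; the problem sizes and iteration count are assumptions for illustration, not the authors' setup.

```python
# Hypothetical baseline, not the cited paper's Algorithm 1: projected gradient
# for nonnegative least squares, min_{x >= 0} 0.5*||Ax - b||^2.
import numpy as np

def nnls_projected_gradient(A, b, iters=500):
    # Step size 1/L with L = spectral norm squared (Lipschitz constant of the gradient).
    L = np.linalg.norm(A, 2) ** 2
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        grad = A.T @ (A @ x - b)
        x = np.maximum(x - grad / L, 0.0)  # gradient step + projection onto x >= 0
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((300, 50))
x_true = np.maximum(rng.standard_normal(50), 0.0)
b = A @ x_true
x_hat = nnls_projected_gradient(A, b)
print("relative error:", np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```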
“…In addition, exact algorithms often lack flexibility for low-rank regularized variants and also have high complexity and slow convergence. Hence, fast approximate algorithms based on first-order methods are preferred, as they naturally provide a flexible framework for low-rank models [4, 9–11].…”
Section: Introduction
Citation type: mentioning (confidence: 99%)
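To make the claim about first-order methods for low-rank models concrete, here is a minimal proximal-gradient sketch for a nuclear-norm regularized objective, using matrix completion as an assumed measurement model; it is an illustrative baseline, not the specific method of refs [4, 9–11]. The proximal operator of the nuclear norm is singular value thresholding.

```python
# Minimal sketch (assumed setup): proximal gradient for a nuclear-norm
# regularized low-rank model, min_X 0.5*||P(X - Y)||_F^2 + lam*||X||_*,
# where P keeps the observed entries (matrix completion).
import numpy as np

def svt(X, t):
    """Singular value thresholding: prox of t * nuclear norm."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return (U * np.maximum(s - t, 0.0)) @ Vt

def matrix_completion(Y, mask, lam=0.5, iters=200):
    """Fill missing entries of Y (observed where mask is True)."""
    X = np.zeros_like(Y)
    for _ in range(iters):
        grad = (X - Y) * mask      # gradient of the masked least-squares term
        X = svt(X - grad, lam)     # step size 1 is valid: the mask operator has L = 1
    return X

rng = np.random.default_rng(0)
Y_full = rng.standard_normal((60, 5)) @ rng.standard_normal((5, 60))  # rank-5 matrix
mask = rng.random((60, 60)) < 0.5                                      # observe half
X_hat = matrix_completion(Y_full * mask, mask)
print("relative error:", np.linalg.norm(X_hat - Y_full) / np.linalg.norm(Y_full))
```

The flexibility the excerpt refers to shows up here directly: swapping the regularizer only means swapping the prox step, with the rest of the loop unchanged.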
“…Efficient numerical methods to obtain x in the context of large-scale problems arising in big data applications include first-order methods and randomization, as well as parallel and distributed computing [8].…”
Section: Big Data Analytics and Distributed Proximal Algorithms
Citation type: mentioning (confidence: 99%)
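One widely used distributed proximal algorithm in this setting is consensus ADMM. The sketch below simulates it on a single machine for an assumed lasso objective split row-wise across four workers; in a real deployment, each worker's local update would run in parallel on its own data shard, with only the averaged iterate communicated.

```python
# Minimal consensus-ADMM sketch (a standard distributed proximal algorithm;
# the inner loop simulates workers that would run in parallel). Assumed
# problem: lasso split by rows, min_x sum_i 0.5*||A_i x - b_i||^2 + lam*||x||_1.
import numpy as np

def soft_threshold(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def consensus_admm_lasso(A_parts, b_parts, lam, rho=1.0, iters=100):
    N = len(A_parts)
    d = A_parts[0].shape[1]
    # Each worker pre-factors its local system once; the expensive work stays local.
    solvers = [np.linalg.inv(A.T @ A + rho * np.eye(d)) for A in A_parts]
    X = np.zeros((N, d))   # local primal variables, one per worker
    U = np.zeros((N, d))   # scaled dual variables, one per worker
    z = np.zeros(d)        # global consensus variable
    for _ in range(iters):
        for i in range(N):  # would run in parallel on worker i
            X[i] = solvers[i] @ (A_parts[i].T @ b_parts[i] + rho * (z - U[i]))
        z = soft_threshold((X + U).mean(axis=0), lam / (rho * N))  # aggregation
        U += X - z          # dual updates, again local to each worker
    return z

rng = np.random.default_rng(0)
A = rng.standard_normal((400, 30))
x_true = np.zeros(30)
x_true[:5] = 1.0
b = A @ x_true + 0.01 * rng.standard_normal(400)
z = consensus_admm_lasso(np.split(A, 4), np.split(b, 4), lam=0.1)
print("support:", np.flatnonzero(np.abs(z) > 0.5))
```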