2017 IEEE 33rd International Conference on Data Engineering (ICDE)
DOI: 10.1109/icde.2017.160
Modeling Scalability of Distributed Machine Learning

Abstract: Present-day machine learning is computationally intensive and processes large amounts of data. To address these scalability issues, it is implemented in a distributed fashion, with the work parallelized across a number of computing nodes. It is usually hard to estimate in advance how many nodes to use for a particular workload. We propose a simple framework for estimating the scalability of distributed machine learning algorithms. We measure the scalability by means of the speedup an algorithm a…
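The abstract is truncated, so the paper's exact speedup model is not visible here. As a hedged illustration of the general idea of modeling speedup as a function of node count, the sketch below uses Amdahl's law (a standard bound, not necessarily the model used in the paper): a workload with a fixed serial fraction sees diminishing returns as nodes are added, which is why choosing the node count in advance is non-trivial.

```python
def amdahl_speedup(serial_fraction: float, nodes: int) -> float:
    """Upper bound on speedup when `serial_fraction` of the work
    cannot be parallelized and the rest scales linearly with `nodes`."""
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / nodes)

# With 5% serial work, speedup saturates well below the node count:
for n in (1, 4, 16, 64):
    print(n, round(amdahl_speedup(0.05, n), 2))
```

The saturation effect is the motivation for scalability estimation: past some node count, adding hardware buys almost no additional speedup.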

Cited by 10 publications (1 citation statement)
References 14 publications (15 reference statements)
“…The model is aware of different convolution computation strategies, including matrix multiplication and Fast Fourier Transform. [20] models scalability based only on hardware specifications. [14] models training on GPUs.…”
Section: Related Work
Confidence: 99%