2019
DOI: 10.1109/tkde.2019.2951388
SlimML: Removing Non-critical Input Data in Large-scale Iterative Machine Learning

Abstract: The core of many large-scale machine learning (ML) applications, such as neural networks (NN), support vector machines (SVM), and convolutional neural networks (CNN), is the training algorithm that iteratively updates model parameters by processing massive datasets. Across the plethora of studies aiming to accelerate ML, such as data parallelization and parameter servers, the prevalent assumption is that all data points are equally relevant to model parameter updating. In this paper, we challenge this assumption…
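The abstract's premise — that some input data points contribute little to parameter updates — can be illustrated with a minimal sketch. This is not the paper's actual SlimML algorithm; it is a toy SGD loop for 1-D linear regression in which, after each epoch, the samples with the smallest current loss (the "non-critical" ones, under this simple heuristic) are dropped from the training set:

```python
import random

def train_with_pruning(data, epochs=20, lr=0.1, keep_frac=0.7):
    """Toy SGD for y ~ w*x that prunes low-loss ("non-critical") samples
    after each epoch, keeping only the keep_frac highest-loss points.
    Illustrative sketch only -- not the SlimML algorithm itself."""
    w = 0.0
    active = list(data)
    for _ in range(epochs):
        random.shuffle(active)
        for x, y in active:
            grad = 2.0 * (w * x - y) * x      # d/dw of (w*x - y)^2
            w -= lr * grad
        # Rank remaining samples by squared error under the current w;
        # keep the most "critical" (highest-loss) fraction for the next epoch.
        ranked = sorted(active, key=lambda p: (w * p[0] - p[1]) ** 2,
                        reverse=True)
        active = ranked[: max(1, int(len(ranked) * keep_frac))]
    return w

# Usage: noiseless data with true slope 3.0; training recovers w close to 3
# while processing fewer and fewer samples each epoch.
random.seed(0)
data = [(x / 10.0, 3.0 * (x / 10.0)) for x in range(1, 11)]
w = train_with_pruning(data)
```

Here `train_with_pruning`, `keep_frac`, and the per-sample-loss ranking are all illustrative choices made for this sketch; the paper's method for identifying non-critical data is what the full text describes.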

Cited by 8 publications (2 citation statements). References 24 publications.
“…We call this method "forgetfulness." Reducing the training set size before training has been previously explored (Ohno-Machado et al, 1998; Han et al, 2021) to reduce training time; we do it dynamically to improve performance.…”
Section: Forgetful Meta-learning
confidence: 99%
“…Deep neural networks (DNNs) have become ubiquitous in computer vision applications [15], spanning from image recognition [10,48] to object detection [46,59] to video analytics [20]. Today's mobile systems run multiple vision related applications which are based on DNN models [4,9,33].…”
Section: Introduction
confidence: 99%