2018
DOI: 10.48550/arxiv.1807.05118
Preprint

Tune: A Research Platform for Distributed Model Selection and Training

Richard Liaw,
Eric Liang,
Robert Nishihara
et al.

Abstract: Modern machine learning algorithms are increasingly computationally demanding, requiring specialized hardware and distributed computation to achieve high performance in a reasonable time frame. Many hyperparameter search algorithms have been proposed for improving the efficiency of model selection; however, their adaptation to the distributed compute environment is often ad hoc. We propose Tune, a unified framework for model selection and training that provides a narrow-waist interface between training scripts …
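To illustrate the narrow-waist interface the abstract describes, here is a minimal, hypothetical sketch using the Ray Tune API: the training script couples to Tune only through a single metric-reporting call, and Tune drives the search over the config. The entry points (tune.run, tune.report, tune.grid_search) follow Ray Tune releases of roughly this era and may differ in newer versions.

```python
# Minimal, hypothetical sketch of the narrow-waist interface described above.
from ray import tune

def trainable(config):
    # A user-defined training script; the only coupling to Tune is the
    # metric-reporting call below.
    acc = 0.0
    for step in range(10):
        acc += config["lr"] * 0.01        # stand-in for a real training step
        tune.report(mean_accuracy=acc)    # hand metrics to Tune through the narrow waist

analysis = tune.run(
    trainable,
    config={"lr": tune.grid_search([0.001, 0.01, 0.1])},
)
print(analysis.get_best_config(metric="mean_accuracy", mode="max"))
```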

Cited by 240 publications (283 citation statements) | References 7 publications
“…For network design and training, Tensorflow 2.0 (Abadi et al, 2016) and Keras 2.3 (Gulli & Pal, 2017) have been used. As model optimization framework, Tune 1.6.0 (Liaw et al, 2018) was selected.…”
Section: Appendix (mentioning, confidence: 99%)
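As a rough illustration of the setup described in the quote above (a Keras model whose hyperparameters are optimized with Tune), the following hypothetical sketch searches a learning rate and a hidden-layer width. The model, the random placeholder data, and the search space are assumptions, not taken from the cited paper, and the tune.report/tune.run calls reflect Ray Tune versions of roughly this era.

```python
# Hypothetical sketch: Keras model training driven by a Tune hyperparameter search.
import numpy as np
from tensorflow import keras
from ray import tune

def train_model(config):
    x = np.random.rand(256, 10).astype("float32")            # placeholder data
    y = np.random.randint(0, 2, size=(256, 1)).astype("float32")
    model = keras.Sequential([
        keras.layers.Dense(config["hidden"], activation="relu"),
        keras.layers.Dense(1, activation="sigmoid"),
    ])
    model.compile(optimizer=keras.optimizers.Adam(config["lr"]),
                  loss="binary_crossentropy", metrics=["accuracy"])
    hist = model.fit(x, y, epochs=5, verbose=0)
    tune.report(accuracy=hist.history["accuracy"][-1])       # report final accuracy

tune.run(
    train_model,
    config={"lr": tune.loguniform(1e-4, 1e-1),
            "hidden": tune.choice([16, 32, 64])},
    num_samples=8,
)
```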
“…A ResNet-50 [17] deep learning model pre-trained on the ImageNet dataset [18] was used. Model preparation and training were done using the PyTorch deep learning framework [19] and tuning of hyperparameters was done using the RayTune framework [20].…”
Section: Isolation Of Particulate Microscopy Images (mentioning, confidence: 99%)
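A hypothetical sketch of the pipeline described in this quote: an ImageNet-pretrained ResNet-50 is fine-tuned in PyTorch while Ray Tune searches the learning rate. The two-class head, the synthetic batch, and the search space are illustrative assumptions.

```python
# Hypothetical sketch: fine-tuning a pretrained ResNet-50 under a Ray Tune search.
import torch
import torchvision
from ray import tune

def finetune(config):
    # pretrained=True is the older torchvision API; newer versions use weights=...
    model = torchvision.models.resnet50(pretrained=True)
    model.fc = torch.nn.Linear(model.fc.in_features, 2)      # adapt head to 2 classes
    optimizer = torch.optim.SGD(model.parameters(), lr=config["lr"], momentum=0.9)
    criterion = torch.nn.CrossEntropyLoss()
    for epoch in range(2):
        images = torch.randn(8, 3, 224, 224)                 # stand-in for a real batch
        labels = torch.randint(0, 2, (8,))
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
        tune.report(loss=loss.item())

tune.run(finetune, config={"lr": tune.loguniform(1e-4, 1e-2)}, num_samples=4)
```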
“…We use the LBFGS optimizer (max iter=20, and history size=10) along with the Optuna library [3] in the Ray hyperparameter tuning framework [49]. Each dataset gets a budget of 200 trials to pick the best parameters on validation set.…”
Section: D1 Transfer Learning (mentioning, confidence: 99%)
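The quoted setup can be sketched roughly as follows: Ray Tune drives trials through an Optuna search algorithm, each trial fits its parameters with LBFGS (max_iter=20, history_size=10), and the per-dataset budget is 200 trials. The linear model, the data, and the searched learning-rate range are assumptions; the OptunaSearch import path also varies across Ray versions.

```python
# Hypothetical sketch: Optuna-driven search in Ray Tune with LBFGS inside each trial.
import torch
from ray import tune
from ray.tune.suggest.optuna import OptunaSearch   # ray.tune.search.optuna in newer Ray

def trial(config):
    x = torch.randn(64, 4)                          # placeholder training data
    y = torch.randn(64, 1)
    model = torch.nn.Linear(4, 1)
    opt = torch.optim.LBFGS(model.parameters(), lr=config["lr"],
                            max_iter=20, history_size=10)

    def closure():
        opt.zero_grad()
        loss = torch.nn.functional.mse_loss(model(x), y)
        loss.backward()
        return loss

    loss = opt.step(closure)                        # LBFGS returns the closure's loss
    tune.report(val_loss=loss.item())

tune.run(
    trial,
    search_alg=OptunaSearch(metric="val_loss", mode="min"),
    config={"lr": tune.loguniform(1e-3, 1.0)},
    num_samples=200,                                # 200-trial budget, as in the quote
)
```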