2020
DOI: 10.48550/arxiv.2011.11846
Preprint

AutoWeka4MCPS-AVATAR: Accelerating Automated Machine Learning Pipeline Composition and Optimisation

Abstract: Automated machine learning (ML) pipeline composition and optimisation aim to automate the process of finding the most promising ML pipelines within allocated resources (i.e., time, CPU and memory). Existing methods, such as Bayesian-based and genetic-based optimisation, which are implemented in Auto-Weka, Auto-sklearn and TPOT, evaluate pipelines by executing them. As a result, the pipeline composition and optimisation of these methods frequently require a tremendous amount of time, which prevents them from expl…
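The abstract's core claim, that execution-based evaluation wastes time on pipelines that were never going to run, is the motivation for AVATAR's validity checking. Below is a minimal sketch of that idea in Python, assuming a simplified capability model: each component declares the abstract data properties it requires and guarantees, and a pipeline is rejected as soon as a requirement cannot be met. The component names and property sets are illustrative assumptions, not AVATAR's actual surrogate model.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Component:
    name: str
    requires: frozenset  # abstract data properties needed before this step
    adds: frozenset      # properties guaranteed after this step

def is_valid(pipeline, data_properties):
    """Propagate abstract data properties through the pipeline instead of
    executing it, so invalid candidates are discarded in milliseconds
    rather than after a full (and doomed) training run."""
    props = set(data_properties)
    for step in pipeline:
        missing = step.requires - props
        if missing:
            return False, f"{step.name} requires {sorted(missing)}"
        props |= step.adds
    return True, "ok"

# Illustrative components: the names echo Weka filters/classifiers, but the
# capability sets are assumptions made for this sketch.
replace_missing = Component("ReplaceMissingValues",
                            frozenset(), frozenset({"no_missing_values"}))
smo = Component("SMO",
                frozenset({"no_missing_values"}), frozenset({"predictions"}))

raw_data = {"numeric_attributes"}  # a dataset that still has missing values
print(is_valid([replace_missing, smo], raw_data))  # (True, 'ok')
print(is_valid([smo], raw_data))  # (False, "SMO requires ['no_missing_values']")
```

Only pipelines that pass this cheap structural check would then be handed to the expensive execution-based evaluator, which is how the validity checker accelerates composition and optimisation.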

Cited by 1 publication (1 citation statement, published 2021) · References 10 publications
“…To explore these research questions, we run a series of experiments with the AutoWeka4MCPS package [1], which is accelerated by the ML-pipeline validity checker, AVATAR [21], wherever specified. All experiments revolve around a meta-knowledge base that is built by using loose assumptions to convert limited SMAC-based AutoML runs across 20 datasets into mean-error statistics and associated performance rankings for 30 Weka predictors, both overall and per dataset.…”
Section: Introduction · Citation type: mentioning · Confidence: 99%
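The meta-knowledge base described in the citation statement above reduces to an aggregation step: collecting per-run error rates from the SMAC-based AutoML runs into mean-error statistics per predictor, then ranking predictors both overall and per dataset. The following is a hedged sketch of that aggregation; the record format, error values, and predictor names are assumptions for illustration, not the citing paper's actual data.

```python
from collections import defaultdict
from statistics import mean

# (dataset, predictor, error_rate) triples harvested from AutoML runs;
# the values here are made up for illustration.
runs = [
    ("iris",   "NaiveBayes", 0.06), ("iris",   "J48", 0.04),
    ("credit", "NaiveBayes", 0.25), ("credit", "J48", 0.29),
]

per_dataset = defaultdict(lambda: defaultdict(list))
overall = defaultdict(list)
for dataset, predictor, err in runs:
    per_dataset[dataset][predictor].append(err)
    overall[predictor].append(err)

def rank(mean_errors):
    """Rank predictors by ascending mean error (rank 1 = best)."""
    ordered = sorted(mean_errors, key=mean_errors.get)
    return {p: i + 1 for i, p in enumerate(ordered)}

overall_mean = {p: mean(errs) for p, errs in overall.items()}
print("overall ranking:", rank(overall_mean))
for dataset, preds in per_dataset.items():
    ds_mean = {p: mean(errs) for p, errs in preds.items()}
    print(f"{dataset} ranking:", rank(ds_mean))
```

In the citing paper this table would span 20 datasets and 30 Weka predictors; the same mean-and-rank logic applies, only over a larger grid of runs.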