2015 International Joint Conference on Neural Networks (IJCNN)
DOI: 10.1109/ijcnn.2015.7280767

Design of the 2015 ChaLearn AutoML challenge

Abstract: ChaLearn is organizing the Automatic Machine Learning (AutoML) contest for IJCNN 2015, which challenges participants to solve classification and regression problems without any human intervention. Participants' code is automatically run on the contest servers to train and test learning machines. However, there is no obligation to submit code; half of the prizes can be won by submitting prediction results only. Datasets of progressively increasing difficulty are introduced throughout the six rounds of the challenge. […]

Cited by 96 publications (69 citation statements)
References 22 publications
“…This requires good knowledge of the specifics of the data and also of ML methods. A workaround to this is to apply automated ML methods, which iterate intelligently through many possible model configurations and select the best-performing option (Guyon et al. 2015; Elsken, Metzen, and Hutter 2018; Feurer et al. 2015; Kotthoff et al. 2017). This approach is slow but often works comparatively well…”
Section: Caveats
confidence: 99%
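
The configuration search this statement describes can be made concrete with a short sketch. The snippet below is an illustrative assumption, not the method of any cited system (Auto-sklearn, Auto-WEKA, etc.): it loops over a small hand-built candidate space with scikit-learn and keeps the configuration with the best cross-validated score.

```python
# Minimal sketch of automated model selection: iterate through candidate
# configurations and keep the best-performing one. The candidate space and
# dataset are hypothetical stand-ins for a real AutoML search space.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=500, n_features=20, random_state=0)

# A tiny candidate space: two model families, a few hyperparameter settings.
candidates = [
    RandomForestClassifier(n_estimators=n, random_state=0)
    for n in (50, 100, 200)
] + [
    LogisticRegression(C=c, max_iter=1000)
    for c in (0.1, 1.0, 10.0)
]

# Score each configuration by 5-fold cross-validation; select the winner.
best_model, best_score = None, -float("inf")
for model in candidates:
    score = cross_val_score(model, X, y, cv=5).mean()
    if score > best_score:
        best_model, best_score = model, score

print(best_model, best_score)
```

Real AutoML systems replace the exhaustive loop with smarter search (Bayesian optimization, bandits), which is where the "iterate intelligently" in the quoted statement comes from.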
“…For each cross-validation iteration, we aggregate the predictions from all folds and calculate a single predictive performance evaluation, in order to avoid any averaging problems that might arise, especially when the dataset is imbalanced (35). For the classification experiments, we used the following evaluation measures: adjusted balanced accuracy score (36, 37), an adaptation of the original accuracy measure that gives higher weights to examples from smaller classes; and the F-score with macro-averaging (38), which is the average F-score among all classes. Both measures treat different subtypes equally…”
Section: Experimental Evaluation of ML Algorithms
confidence: 99%
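
The protocol in this statement, pooling out-of-fold predictions before computing a single score, maps onto standard scikit-learn calls. A minimal sketch follows; the imbalanced toy dataset and the random-forest classifier are assumptions for illustration, not details of the citing study.

```python
# Sketch of the quoted evaluation protocol: gather out-of-fold predictions
# from cross-validation, then compute one adjusted balanced accuracy and one
# macro-averaged F-score over the pooled predictions.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import balanced_accuracy_score, f1_score
from sklearn.model_selection import cross_val_predict

# Imbalanced toy dataset (80/20 class split) standing in for the real data.
X, y = make_classification(n_samples=600, weights=[0.8, 0.2], random_state=0)

# Aggregate predictions from all folds before scoring, so per-fold averages
# cannot wash out the minority class.
y_pred = cross_val_predict(RandomForestClassifier(random_state=0), X, y, cv=5)

# adjusted=True rescales balanced accuracy so chance-level performance is 0.
print("adjusted balanced accuracy:",
      balanced_accuracy_score(y, y_pred, adjusted=True))
# Macro averaging weights every class equally, regardless of its size.
print("macro F-score:", f1_score(y, y_pred, average="macro"))
```

Both metrics are class-size agnostic, which is why the quoted passage notes that they treat different subtypes equally.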
“…However, a lack of suitable benchmarks, evaluation protocols and performance metrics has limited progress. Recently, we have organized challenges on Autonomous Machine Learning that have made significant progress in the field, see, e.g., [5, 4]. The challenges have attracted a large number of participants (almost 1,000 in the combined challenges), providing evidence of the relevance of the problem and the interest from the community…”
Section: Challenge Setting and Background
confidence: 99%