COCO: a platform for comparing continuous optimizers in a black-box setting
2020
DOI: 10.1080/10556788.2020.1808977

Cited by 275 publications (184 citation statements)
References 52 publications
“…The difficulty of comparing alternative algorithm implementations and of assessing incremental changes to a given algorithm is now being increasingly recognized (e.g. Hansen et al., 2016; Sörensen, 2013; Weyland, 2012, 2015). This has led some algorithm developers to formalize "standard" algorithm implementations that can be used to establish the expected baseline reference behaviour and performance of a given algorithm (Bratton and Kennedy, 2007; Swan et al., 2015).…”
Section: How Do We Implement EAs? (mentioning)
confidence: 99%
“…function evaluations provided by computationally intensive simulations).
[320] First benchmark, all functions shifted
CEC'2011 [321] Real-world problems, small dimensions
CEC'2013 [322] Rotated and shifted functions
CEC'2014 [323] More multimodal functions
CEC'2014 Expensive [324] Reduced number of evaluations
CEC'2015 [325] Allowed specific parameter values for functions
CEC'2017 [326] Composed test problems by extracting features dimension-wise for several problems
BBOB [327] Functions with increasing dimensionality (from small dimensions)…”
Section: Benchmarks and Comparison Methodologies (mentioning)
confidence: 99%
“…To check the utility of other, more efficient machine learning methods, we decided to include also linear regression, nearest neighbors and regression trees. For each of the 36 surrogate-relevator combinations, we tuned the five parameters of the meta model by using COCO, the platform for comparing numerical optimization methods in a black-box setting [8]. The parameters were tuned by using grid search with values as shown in Table 1.…”
Section: A Meta-model Tuning and Selection (mentioning)
confidence: 99%
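The excerpt above tunes meta-model parameters by benchmarking on COCO ([8] in that excerpt). As context, COCO exposes its BBOB suites through a Python module; the following is a minimal sketch of a COCO experiment loop, assuming the cocoex module from the COCO distribution is available and using scipy.optimize.fmin purely as a placeholder for whichever optimizer is actually being tuned.

```python
# Minimal sketch of a COCO (BBOB) benchmarking loop.
# Assumes the cocoex Python module from the COCO distribution and SciPy are installed;
# scipy.optimize.fmin stands in for the optimizer under test.
import cocoex
import scipy.optimize

suite = cocoex.Suite("bbob", "", "")                       # noiseless BBOB test functions
observer = cocoex.Observer("bbob", "result_folder: demo")  # logs data for post-processing

for problem in suite:
    problem.observe_with(observer)  # record every evaluation of this problem
    scipy.optimize.fmin(problem, problem.initial_solution, disp=False)
    problem.free()                  # release the underlying C problem object
```

The data written by the observer can then be post-processed with COCO's cocopp module to produce the platform's standard runtime and performance profiles.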
“…We select each algorithm in the pair among six alternative algorithms for learning predictive models, previously used in the literature on surrogate-based optimization (linear regression, decision trees, nearest neighbors, support vector machines, Gaussian processes and random forests), leading to 36 meta-model instances. In the first series of experiments, performed on synthetic benchmarks [8], we tune the parameters of each meta-model instance. In turn, we select the most successful instances that significantly outperform the others.…”
Section: Introduction (mentioning)
confidence: 99%
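To make the counting in that excerpt concrete, the 36 meta-model instances arise from pairing each of the six learning algorithms with each of the six as a (surrogate, relevator) pair. The sketch below only illustrates that enumeration; the scikit-learn regressor classes are stand-ins chosen to match the six method names listed, not necessarily the implementations used in the cited paper, and its surrogate/relevator logic is not reproduced here.

```python
# Illustrative sketch: enumerating the 6 x 6 = 36 (surrogate, relevator) pairs
# mentioned in the excerpt, with scikit-learn regressors as assumed stand-ins.
from itertools import product

from sklearn.linear_model import LinearRegression
from sklearn.tree import DecisionTreeRegressor
from sklearn.neighbors import KNeighborsRegressor
from sklearn.svm import SVR
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.ensemble import RandomForestRegressor

model_classes = [
    LinearRegression,
    DecisionTreeRegressor,
    KNeighborsRegressor,
    SVR,
    GaussianProcessRegressor,
    RandomForestRegressor,
]

# Each meta-model instance is one surrogate model paired with one relevator model.
meta_model_instances = [
    (surrogate(), relevator())
    for surrogate, relevator in product(model_classes, repeat=2)
]
assert len(meta_model_instances) == 36
```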