Proceedings of the Genetic and Evolutionary Computation Conference Companion 2019
DOI: 10.1145/3319619.3326888
Bayesian performance analysis for black-box optimization benchmarking

Cited by 26 publications (34 citation statements) | References 16 publications
“…run a local search algorithm to find a local optimum. Then, for each of the four distance-metrics (Hamming, Cayley, Kendall's-τ and Ulam), the average normalized difference in the objective value with respect to the local optimum is computed for ∀k ∈ [14]. Specifically, defining σ_0 as the local optimum, for each of the metrics, we approximate the difference ψ^{-1}…”
Section: Distance-metrics (mentioning, confidence: 99%)
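For concreteness, the sketch below computes the four permutation distance-metrics named in the quote (Hamming, Cayley, Kendall's-τ and Ulam) between a candidate permutation and a reference permutation such as a local optimum σ_0. The function names, the 0-indexed list encoding of permutations and the example data are illustrative assumptions, not taken from the cited paper.

```python
# Hedged sketch: four permutation distance-metrics, each expressed via the
# composed permutation sigma o pi^{-1}. Names and example values are
# illustrative placeholders, not the cited paper's code.
from bisect import bisect_left

def compose_inverse(sigma, pi):
    """Return sigma o pi^{-1} as a 0-indexed list."""
    inv_pi = [0] * len(pi)
    for idx, val in enumerate(pi):
        inv_pi[val] = idx
    return [sigma[inv_pi[i]] for i in range(len(sigma))]

def hamming(sigma, pi):
    """Number of positions where the two permutations disagree."""
    return sum(a != b for a, b in zip(sigma, pi))

def kendall_tau(sigma, pi):
    """Number of discordant pairs = inversions of sigma o pi^{-1} (O(n^2) for clarity)."""
    comp = compose_inverse(sigma, pi)
    n = len(comp)
    return sum(1 for i in range(n) for j in range(i + 1, n) if comp[i] > comp[j])

def cayley(sigma, pi):
    """n minus the number of cycles of sigma o pi^{-1} (minimum transpositions)."""
    comp = compose_inverse(sigma, pi)
    n, seen, cycles = len(comp), [False] * len(comp), 0
    for i in range(n):
        if not seen[i]:
            cycles += 1
            j = i
            while not seen[j]:
                seen[j] = True
                j = comp[j]
    return n - cycles

def ulam(sigma, pi):
    """n minus the length of the longest increasing subsequence of sigma o pi^{-1}."""
    comp = compose_inverse(sigma, pi)
    tails = []
    for x in comp:
        k = bisect_left(tails, x)
        if k == len(tails):
            tails.append(x)
        else:
            tails[k] = x
    return len(comp) - len(tails)

if __name__ == "__main__":
    sigma0 = [2, 0, 3, 1]   # e.g. a local optimum found by local search (assumed data)
    sigma = [0, 2, 3, 1]    # a candidate permutation at some distance from it
    for name, dist in [("Hamming", hamming), ("Kendall", kendall_tau),
                       ("Cayley", cayley), ("Ulam", ulam)]:
        print(name, dist(sigma, sigma0))
```

Normalizing each distance by its maximum value over S_n would give the kind of normalized quantity the quote averages; that normalization is left out here since the quoted excerpt is truncated.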
“…For each benchmark instance, the results are recorded as the Average Relative Deviation Percentage, ARDP = |(f_best − f_av) / f_best|, where f_best is the best known value and f_av is the average of the best objective values obtained in each repetition. For further statistical analysis, Bayesian Performance Analysis [14, 13] (BPA) is carried out to study the uncertainty of the results of each experiment. Specifically, Plackett-Luce is used as the probability model, defined in S_n, in this case corresponding to the rankings of the algorithms. BPA considers probability distributions over probability distributions.…”
Section: General Remarks (mentioning, confidence: 99%)
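The following sketch illustrates, with assumed placeholder data, the two quantities mentioned in the quote: the ARDP score and a Plackett-Luce model over algorithm rankings, from which uncertainty about which algorithm ranks first can be read off by sampling. Function names, weights and numeric values are hypothetical, not the paper's actual implementation or results.

```python
# Hedged sketch: ARDP and a Plackett-Luce ranking model. Weights and data
# below are illustrative assumptions only.
import random

def ardp(f_best, f_avg):
    """Average Relative Deviation Percentage: |(f_best - f_avg) / f_best|."""
    return abs((f_best - f_avg) / f_best)

def plackett_luce_sample(weights, rng=random):
    """Draw one ranking (best to worst) from a Plackett-Luce model.

    Each item is chosen for the next rank with probability proportional
    to its weight among the items not yet ranked.
    """
    remaining = list(range(len(weights)))
    ranking = []
    while remaining:
        total = sum(weights[i] for i in remaining)
        r, acc = rng.random() * total, 0.0
        pick = remaining[-1]          # fallback guards against float round-off
        for i in remaining:
            acc += weights[i]
            if r <= acc:
                pick = i
                break
        ranking.append(pick)
        remaining.remove(pick)
    return ranking

if __name__ == "__main__":
    print(f"ARDP = {ardp(f_best=100.0, f_avg=103.5):.4f}")  # 0.0350 with assumed values
    # Hypothetical Plackett-Luce weights for three algorithms; a higher weight
    # means the algorithm tends to be ranked first more often.
    w = [0.6, 0.3, 0.1]
    samples = [plackett_luce_sample(w) for _ in range(5000)]
    print("P(algorithm 0 ranked first) ≈",
          sum(s[0] == 0 for s in samples) / len(samples))
```

In the Bayesian Performance Analysis framework the weights themselves carry a posterior distribution inferred from the observed rankings, which is what "probability distributions over probability distributions" refers to; here the weights are simply fixed for illustration.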
“…This reliance on experimental assessment and comparison of algorithms is evidenced by the continuing effort of researchers in devising better experimental protocols for performance assessment and comparison of algorithms. While many of the most important points were presented as far back as the late 1990s (Barr et al. 1995; McGeoch 1996; Hooker 1996), research into adequate protocols and tools for comparing algorithms has continued in the past two decades, with several statistical approaches being proposed and employed for comparing the performance of algorithms (Coffin and Saltzman 2000; Johnson 2002; Yuan and Gallagher 2004; Demšar 2006; Yuan and Gallagher 2009; Birattari 2004; Birattari and Dorigo 2007; Bartz-Beielstein 2006; Bartz-Beielstein et al. 2010; García et al. 2008, 2010; Derrac et al. 2011; Carrano et al. 2011; Derrac et al. 2014; Benavoli et al. 2014; Krohling et al. 2015; Hansen et al. 2016; Campelo and Takahashi 2019; Calvo et al. 2019). This increased prevalence of more statistically sound experiments in the field of optimisation heuristics can be seen as part of the transition of the area into what has been called the scientific period of research on metaheuristics (Sörensen et al. 2018).…”
Section: Introduction (mentioning, confidence: 99%)
“…Other methods of analysis, both analytic and graphical, can be useful for answering distinct questions related to the performance of algorithms on individual instances or problem classes. Bayesian approaches, in particular, have been gaining popularity for the comparison of machine learning and optimisation algorithms (Benavoli et al. 2017; Calvo et al. 2019). While the methods proposed here are placed within the framework of frequentist statistics, the general sampling approach can be easily adapted to Bayesian estimation and hypothesis testing, which also requires adequate sampling practices.…”
Section: Introduction (mentioning, confidence: 99%)