2011
DOI: 10.1016/j.swevo.2011.02.002
A practical tutorial on the use of nonparametric statistical tests as a methodology for comparing evolutionary and swarm intelligence algorithms

Cited by 4,375 publications (1,850 citation statements)
References 29 publications
“…In addition, we hope to show the performance of DCABC by Null Hypothesis Significance Testing (NHST) [35,36] in our future work. We only test the new algorithm on classical benchmark functions and have not used it to solve practical problems, such as fault diagnosis [37], path plan [38], Knapsack [39][40][41], multi-objective optimization [42], gesture segmentation [43], unit commitment problem [44], and so on.…”
Section: Results (mentioning)
confidence: 99%
“…To verify this hypothesis, statistical analysis was performed. Table 4 reports average ranking of the methods after the Friedman test (Derrac et al, 2011). For each configuration, the lowest rank is marked in bold.…”
Section: Comparison of the Algorithms (mentioning)
confidence: 99%
“…For each configuration, the lowest rank is marked in bold. Next, statistically significant differences were identified based on the Shaffer test with α = 0.05 level of significance (Derrac et al, 2011) and the ranking after the Friedman test. According to the results, the CG rank is always the smallest and very close to 1 while the differences between CG results and results of any other non-exact method are statistically significant.…”
Section: Comparison of the Algorithms (mentioning)
confidence: 99%
“…In order to comply with reported best practice in the evaluation of the performance of neural networks, (Luengo et al, 2009;García et al, 2010;Derrac et al, 2011), we evaluated the statistical significance of the observed performance results applying the Friedman test. This test ranks the performance of a set of k algorithms and can detect a significant difference in the performance of at least two algorithms.…”
Section: Non-parametric Statistical Analysis and Post-hoc Procedures (mentioning)
confidence: 99%
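The Friedman test mentioned in the excerpt above is available directly in SciPy. A minimal sketch, using hypothetical error rates for three algorithms over five benchmark problems (all values below are invented for illustration):

```python
# Hedged sketch: Friedman test over k = 3 algorithms evaluated on n = 5 problems.
# Each list holds one algorithm's (hypothetical) error rate per benchmark problem.
from scipy.stats import friedmanchisquare

alg_a = [0.12, 0.10, 0.15, 0.11, 0.14]
alg_b = [0.18, 0.17, 0.20, 0.19, 0.16]
alg_c = [0.13, 0.12, 0.14, 0.12, 0.15]

# The test ranks the algorithms within each problem and checks whether
# at least two of them differ significantly in their average ranks.
stat, p = friedmanchisquare(alg_a, alg_b, alg_c)
print(f"Friedman chi-square = {stat:.3f}, p = {p:.4f}")
```

A significant p-value here only says that *some* pair of algorithms differs; identifying which pairs differ requires a post-hoc procedure such as those discussed in the cited tutorial.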
“…For the sake of our evaluation we need to carry out a multiple comparisons analysis between performance of the LIT-Approach and performance of each one of the other initialization methods. This is a multiple comparisons (pairwise) analysis (Derrac et al, 2011) with a control algorithm which results in formulating k − 1 hypotheses one for each of the k − 1 comparisons, where in our case k = 7. A better performance for the convergence rate of an algorithm trans- lates here to a smaller number of epochs and a better performance for generalization is taken to be a smaller classification or approximation error.…”
Section: Non-parametric Statistical Analysis and Posthoc Proceduresmentioning
confidence: 99%
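The 1×N comparison with a control described in this excerpt (k − 1 hypotheses, one per comparison against the control method) can be sketched as follows. The average ranks, the method names, and the problem count n = 20 are all hypothetical; the z statistic over Friedman average ranks and the Holm step-down adjustment follow the general scheme covered in the cited tutorial:

```python
# Hedged sketch: post-hoc 1-vs-all comparison after a Friedman test,
# with Holm correction over the k - 1 hypotheses.
import math
from scipy.stats import norm

k, n = 7, 20  # hypothetical: 7 algorithms compared on 20 problems
avg_ranks = {  # hypothetical average Friedman ranks
    "LIT": 1.8, "A": 3.2, "B": 4.1, "C": 4.5, "D": 4.8, "E": 5.0, "F": 4.6,
}
control = "LIT"

# Standard error of the difference between two average ranks.
se = math.sqrt(k * (k + 1) / (6.0 * n))

# Unadjusted two-sided p-value for each control-vs-other comparison.
pvals = {}
for name, r in avg_ranks.items():
    if name == control:
        continue
    z = abs(avg_ranks[control] - r) / se
    pvals[name] = 2.0 * norm.sf(z)

# Holm step-down adjustment: sort ascending, multiply the i-th smallest
# p-value by (m - i), and enforce monotonicity via a running maximum.
ordered = sorted(pvals.items(), key=lambda kv: kv[1])
m = len(ordered)
adjusted, running_max = {}, 0.0
for i, (name, p) in enumerate(ordered):
    running_max = max(running_max, (m - i) * p)
    adjusted[name] = min(1.0, running_max)

for name in pvals:
    print(f"{control} vs {name}: Holm-adjusted p = {adjusted[name]:.4f}")
```

Holm is only one of the adjustment procedures the tutorial discusses (others include Holland, Finner, and, for N×N comparisons, Shaffer); it is shown here because it is simple to implement and uniformly more powerful than Bonferroni.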