2012
DOI: 10.1016/j.patcog.2011.10.005

Cost-conscious comparison of supervised learning algorithms over multiple data sets

Abstract: We propose Multi²Test for ordering multiple learning algorithms on multiple data sets from "best" to "worst." Our goodness measure uses a prior cost term in addition to generalization error. Our simulations show that Multi²Test generates orderings using pairwise tests on error and different types of cost.
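The abstract describes ordering algorithms from "best" to "worst" based on pairwise test outcomes. As a minimal illustrative sketch (not the authors' Multi²Test algorithm), one simple way to turn a matrix of pairwise "significantly better" results into an ordering is to rank algorithms by their number of wins; the `wins` matrix below is hypothetical:

```python
import numpy as np

# Hypothetical pairwise outcomes for 4 algorithms: wins[i, j] = 1 means
# algorithm i was significantly better than algorithm j under some paired
# test on error/cost; 0 means no significant difference or a loss.
wins = np.array([
    [0, 1, 1, 1],
    [0, 0, 1, 0],
    [0, 0, 0, 0],
    [0, 0, 1, 0],
])

# Order algorithms by descending win count (stable sort breaks ties by index).
order = np.argsort(-wins.sum(axis=1), kind="stable")
print("ordering (best to worst):", order.tolist())  # prints [0, 1, 3, 2]
```

Win-counting is only one tie-breaking convention; the paper's actual procedure additionally folds a prior cost term into the goodness measure.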


Cited by 41 publications (16 citation statements)
References 19 publications
“…The number of runs is reduced to 51 (as suggested in Liang et al 2013). The statistical significance of pairwise comparisons among all 14 algorithms is tested by means of the Friedman's test with post hoc Shaffer's static procedure (Shaffer 1986), as suggested in Garcia and Herrera (2008) and Ulas et al (2012). The MATLAB implementation of Shaffer's procedure, developed by Ulas et al (2012), and available at www.cmpe.boun.edu.tr/~ulas/m2test/details.php, is applied in this study.…”
Section: Methods for Structural Bias Detection
confidence: 99%
“…For the significance tests, the procedure suggested by Demsar [5] is utilized by using the Friedman significance test with the Nemenyi post-hoc procedure, with p < 0.05. The toolbox used in the implementation is provided in [15]. Here, the ensembles pruned by AcEc − P with the given pruning rates are assigned 0 if there is no statistical difference between them and the unpruned ensemble, and 1 otherwise.…”
Section: Discussion of the Results
confidence: 99%
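The statement above applies the procedure suggested by Demsar: a Friedman test over per-data-set ranks, followed by the Nemenyi post-hoc test at p < 0.05. A minimal sketch of that procedure (illustrative only — not the toolbox of [15]; the error matrix and the tabulated q value for k = 3 are assumptions):

```python
import numpy as np
from scipy import stats

# Hypothetical error rates: rows = data sets, columns = algorithms.
errors = np.array([
    [0.12, 0.15, 0.20],
    [0.08, 0.11, 0.13],
    [0.22, 0.25, 0.30],
    [0.10, 0.14, 0.12],
    [0.18, 0.17, 0.24],
    [0.09, 0.13, 0.16],
])
n_datasets, k = errors.shape

# Friedman test: are the k algorithms' rank distributions distinguishable?
stat, p = stats.friedmanchisquare(*errors.T)

# Nemenyi critical difference: CD = q_alpha * sqrt(k(k+1) / (6N)).
# 2.343 is the tabulated q value for k = 3, alpha = 0.05 (studentized
# range statistic divided by sqrt(2), as in Demsar's procedure).
q_alpha = 2.343
cd = q_alpha * np.sqrt(k * (k + 1) / (6.0 * n_datasets))

# Two algorithms differ significantly if their average ranks differ by >= CD.
avg_ranks = stats.rankdata(errors, axis=1).mean(axis=0)
print(f"Friedman p = {p:.4f}, CD = {cd:.3f}, average ranks = {avg_ranks}")
```

The Shaffer static procedure used in the first statement replaces the Nemenyi step with a step-down correction that exploits the logical structure of pairwise hypotheses, giving more power at the same family-wise error rate.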
“…As there are many different kernels for SVM available it is important to choose a suitable one, in accordance with the requirements of the data, in order to achieve the best classification results. A more complex kernel adds to the computation cost of the algorithm compared to a linear SVM (Ulaş et al 2012). However, given the recent development in ICT, most optimization problems of high-dimensionality, even…”
Section: Theoretical Background 1 SVM as a Classification Technique
confidence: 99%