Published: 2000
DOI: 10.1007/3-540-45164-1_8
A Comparison of Ranking Methods for Classification Algorithm Selection

Abstract: We investigate the problem of using past performance information to select an algorithm for a given classification problem. We present three ranking methods for that purpose: average ranks, success rate ratios and significant wins. We also analyze the problem of evaluating and comparing these methods. The evaluation technique used is based on a leave-one-out procedure. On each iteration, the method generates a ranking using the results obtained by the algorithms on the training datasets. This ranking is then e…
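The abstract describes a leave-one-out evaluation: on each iteration one dataset is held out, a ranking is generated from the remaining datasets, and that ranking is compared with the algorithms' actual ranking on the held-out dataset. A minimal sketch of that loop, using average ranks as the ranking method and Spearman correlation as the comparison measure (the accuracy matrix and all values below are illustrative assumptions, not the paper's data):

```python
import numpy as np

def spearman(r1, r2):
    """Spearman rank correlation between two tie-free rankings."""
    d = np.asarray(r1) - np.asarray(r2)
    n = len(d)
    return 1 - 6 * np.sum(d ** 2) / (n * (n ** 2 - 1))

def rank_desc(values):
    """Assign rank 1 to the highest value, rank n to the lowest."""
    order = np.argsort(-np.asarray(values))
    ranks = np.empty_like(order)
    ranks[order] = np.arange(1, len(order) + 1)
    return ranks

# Hypothetical accuracies: rows = datasets, columns = algorithms.
results = np.array([
    [0.90, 0.85, 0.80],
    [0.70, 0.75, 0.72],
    [0.88, 0.84, 0.86],
    [0.65, 0.70, 0.60],
])

corrs = []
for i in range(len(results)):
    # Build the average-rank ranking from all datasets except the i-th.
    train = np.delete(results, i, axis=0)
    avg_ranks = np.mean([rank_desc(row) for row in train], axis=0)
    predicted = rank_desc(-avg_ranks)      # lowest average rank comes first
    # Compare with the true ranking on the held-out dataset.
    actual = rank_desc(results[i])
    corrs.append(spearman(predicted, actual))
```

The mean of `corrs` then summarizes how well the ranking method generalizes to unseen datasets.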

Cited by 109 publications (65 citation statements)
References 9 publications (18 reference statements)
“…In order to compare these methods comprehensively, we sort them using each measure first and then calculate the average rank over five measures for each method, as listed in the last rows of Tables 3-5. This comparison strategy was recommended in [18]. In Table 3 for the Emotions data, MLC-DWkNN-Dudani1 works the best on all five measures.…”
Section: Results and Analysis
confidence: 99%
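The statement above describes the recommended comparison strategy: rank the methods separately under each evaluation measure, then average each method's ranks across all measures. A minimal sketch with a hypothetical score matrix (method and measure values are made up for illustration; higher scores are assumed better, and ties are not handled):

```python
import numpy as np

# Hypothetical scores: rows = methods, columns = evaluation measures.
scores = np.array([
    [0.82, 0.78, 0.90],  # method A
    [0.85, 0.75, 0.88],  # method B
    [0.80, 0.80, 0.91],  # method C
])

# Rank methods per measure: rank 1 = best score on that measure.
order = np.argsort(-scores, axis=0)
ranks = np.empty_like(order)
for col in range(scores.shape[1]):
    ranks[order[:, col], col] = np.arange(1, scores.shape[0] + 1)

# Average rank per method over all measures; the lowest value wins.
avg_rank = ranks.mean(axis=1)
```

Here method C ends up with the lowest average rank despite not winning on the first measure, which is exactly the kind of aggregate judgment the strategy is meant to support.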
“…Among the classification algorithms, the selection of the most adequate classification algorithm that fits to a specific problem is a difficult task (Brazdil and Soares, 2000). Training all classifiers to constitute an ensemble is a complex task and can lead to increased computational time and costs.…”
Section: Measuring Classification Methods
confidence: 99%
“…In the context of Label Ranking it is common to use the average ranking as the consensus ranking (Brazdil et al 2000). The average ranking is obtained by computing the average of the ranks, where the label with the lowest values is ranked in first place, and so on.…”
Section: Label Ranking
confidence: 99%
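The consensus ranking described above — average the ranks each label receives, then place the label with the lowest average first — can be sketched as follows (the label names and ranks are invented for illustration):

```python
# Three hypothetical rankings of four labels (rank 1 = first place),
# e.g. one ranking per dataset.
rankings = {
    "svm":  [1, 2, 1],
    "tree": [2, 1, 3],
    "knn":  [3, 4, 2],
    "nb":   [4, 3, 4],
}

# Average rank per label, then sort ascending: the label with the
# lowest average rank takes first place in the consensus ranking.
avg = {label: sum(r) / len(r) for label, r in rankings.items()}
consensus = sorted(avg, key=avg.get)
# consensus → ['svm', 'tree', 'knn', 'nb']
```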