2014
DOI: 10.1007/978-3-319-12640-1_66
Evaluation Protocol of Early Classifiers over Multiple Data Sets

Cited by 4 publications (4 citation statements) · References 7 publications
“…If we take that value as a worst-case scenario, we can conduct several statistical tests to show the significance of the obtained results. In particular, we applied the Friedman and Nemenyi statistics to look for statistical significance among the obtained performances [20]. To compare the performances obtained by each of the three methods on the two datasets, Table 4 shows the mean rank of each method over the 14 different scores, corresponding to the different label performances.…”
Section: Results (mentioning)
confidence: 99%
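The Friedman-plus-Nemenyi procedure quoted above can be sketched in a few lines. This is an illustrative example with made-up placeholder scores, not the paper's actual data; the shape (14 scores, 3 methods) mirrors the quoted comparison.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Placeholder data: 14 scores (rows) for 3 methods (columns).
scores = rng.random((14, 3))

# Friedman test: do the three methods' scores differ significantly overall?
stat, p = stats.friedmanchisquare(scores[:, 0], scores[:, 1], scores[:, 2])

# Mean rank of each method across the 14 scores (rank 1 = best, i.e. highest score).
ranks = stats.rankdata(-scores, axis=1)  # rank methods within each row
mean_ranks = ranks.mean(axis=0)

# Nemenyi critical difference (Demšar, 2006): two methods differ significantly
# when their mean ranks differ by at least CD; q_0.05 ~ 2.343 for k = 3 methods.
k, N = 3, 14
cd = 2.343 * np.sqrt(k * (k + 1) / (6 * N))
print(f"Friedman chi2={stat:.3f}, p={p:.3f}, mean ranks={mean_ranks}, CD={cd:.3f}")
```

The Friedman test acts as the omnibus check; only if it rejects the null is the pairwise Nemenyi critical-difference comparison applied to the mean ranks.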
“…• Boosting (AB): this method separates resampled data sets; unlike Bagging, the weight of each model's vote is based on that model's performance rather than all sets carrying the same weight (Batista, 2022). … the Nemenyi test, which performs a pairwise comparison over a set of data series and indicates the statistical difference between them (Demšar, 2006).…”
Section: Processo de Aprendizado (Learning Process) (unclassified)
“…One of the most useful techniques for the statistical evaluation of an early classifier is proposed in [78]. Because early classifiers address two conflicting objectives (i.e., earliness and accuracy) together, comparing the statistical significance of one early classifier with another becomes more challenging.…”
Section: Statistical Evaluation of Early Classifier (mentioning)
confidence: 99%
“…As early classifiers address two conflicting objectives (i.e., earliness and accuracy) together, comparing the statistical significance of one early classifier with another becomes more challenging. In [78], the authors therefore employed two well-known statistical methods, the Wilcoxon signed-rank test [79] and the Pareto optimum [80], for evaluating early classifiers on many UCR datasets [19]. The evaluation technique uses the Wilcoxon signed-rank test for independent comparison, where it compares two early classifiers on each objective independently on the same dataset.…”
Section: Statistical Evaluation of Early Classifier (mentioning)
confidence: 99%
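The two-sided protocol described in the quote, one Wilcoxon signed-rank test per objective plus a per-dataset Pareto check, can be sketched as follows. All values below are randomly generated placeholders, not results from [78]; the variable names (`acc_a`, `earl_a`, etc.) are hypothetical.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n = 20  # number of datasets (e.g. subsets of the UCR archive)
acc_a, acc_b = rng.random(n), rng.random(n)    # accuracy; higher is better
earl_a, earl_b = rng.random(n), rng.random(n)  # fraction of the series seen; lower is better

# One Wilcoxon signed-rank test per objective, run independently,
# as in the quoted evaluation technique.
_, p_acc = stats.wilcoxon(acc_a, acc_b)
_, p_earl = stats.wilcoxon(earl_a, earl_b)

def dominates(acc1, earl1, acc2, earl2):
    """True when classifier 1 Pareto-dominates classifier 2 on one dataset:
    at least as accurate and at least as early, strictly better in one objective."""
    return (acc1 >= acc2 and earl1 <= earl2) and (acc1 > acc2 or earl1 < earl2)

wins_a = sum(dominates(a1, e1, a2, e2)
             for a1, e1, a2, e2 in zip(acc_a, earl_a, acc_b, earl_b))
print(f"p(accuracy)={p_acc:.3f}, p(earliness)={p_earl:.3f}, A dominates on {wins_a}/{n} datasets")
```

Testing each objective separately avoids collapsing earliness and accuracy into a single ad-hoc score, while the Pareto check captures the joint trade-off per dataset.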