2003
DOI: 10.1023/a:1022859003006

Untitled

Abstract: Diversity among the members of a team of classifiers is deemed to be a key issue in classifier combination. However, measuring diversity is not straightforward because there is no generally accepted formal definition. We have found and studied ten statistics which can measure diversity among binary classifier outputs (correct or incorrect vote for the class label): four averaged pairwise measures (the Q statistic, the correlation, the disagreement and the double fault) and six non-pairwise measures (…)
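As a concrete illustration of the pairwise measures listed in the abstract, the Q statistic can be computed from "oracle" outputs (1 = correct vote, 0 = incorrect vote). The sketch below is a minimal pure-Python version of the standard pairwise formula; the function name and the zero-denominator convention are choices of this example, not the paper's.

```python
def q_statistic(a, b):
    """Pairwise Q statistic between two classifiers' oracle outputs.

    a, b are sequences of 1 (correct vote) / 0 (incorrect vote).
    Q = (N11*N00 - N01*N10) / (N11*N00 + N01*N10), in [-1, 1];
    statistically independent classifiers give Q near 0.
    """
    n11 = sum(1 for x, y in zip(a, b) if x == 1 and y == 1)
    n00 = sum(1 for x, y in zip(a, b) if x == 0 and y == 0)
    n01 = sum(1 for x, y in zip(a, b) if x == 0 and y == 1)
    n10 = sum(1 for x, y in zip(a, b) if x == 1 and y == 0)
    denom = n11 * n00 + n01 * n10
    if denom == 0:
        return 0.0  # convention chosen for this sketch; degenerate case
    return (n11 * n00 - n01 * n10) / denom
```

For a team of L classifiers, the averaged pairwise version simply averages this quantity over all L(L-1)/2 classifier pairs.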

Cited by 1,889 publications
(199 citation statements)
References 31 publications
“…These data were combined and seven different data sets were generated. (16)(17)(18) Mean, variance, standard deviation, skewness, and kurtosis were extracted as five time-domain features, and energy and entropy were extracted as two frequency-domain features. (19,20) These selected features were used as original input features for the model algorithm and selected once again in the internal RF algorithm.…”
Section: RF Algorithm (mentioning)
confidence: 99%
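The feature-extraction step quoted above can be sketched as follows. The population-moment formulas for skewness and kurtosis and the spectral-entropy definition are common conventions assumed here; the cited work may normalise differently, and the function names are illustrative.

```python
import math

def time_domain_features(x):
    """The five time-domain features named in the quote:
    mean, variance, standard deviation, skewness, kurtosis."""
    n = len(x)
    mean = sum(x) / n
    var = sum((v - mean) ** 2 for v in x) / n        # population variance
    std = math.sqrt(var)
    skew = sum((v - mean) ** 3 for v in x) / (n * std ** 3)
    kurt = sum((v - mean) ** 4 for v in x) / (n * var ** 2)
    return {"mean": mean, "variance": var, "std": std,
            "skewness": skew, "kurtosis": kurt}

def frequency_domain_features(power_spectrum):
    """The two frequency-domain features named in the quote, computed
    from a precomputed power spectrum (e.g. squared FFT magnitudes)."""
    total = sum(power_spectrum)
    probs = [p / total for p in power_spectrum]      # normalised spectrum
    entropy = -sum(p * math.log2(p) for p in probs if p > 0)
    return {"energy": total, "entropy": entropy}
```

These seven values per window would then form the input feature vector that the internal RF algorithm re-selects from.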
“…Therefore, we should be able to solve our new challenge without the need to build a totally new model. The idea of mixing many different models is very old in machine learning literature (Rasmussen & Ghahramani 1991; Jordan & Jacobs 1993; Meir 1996; Breiman 2001; Kuncheva & Whitaker 2003; Kuncheva 2007; Bishop & Svensen 2012; Chamroukhi 2015). These approaches are guided by the "divide and conquer" principle, in which each expert focuses on a particular area of feature space.…”
Section: A Model That Classifies RR Lyrae (mentioning)
confidence: 99%
“…The idea of studying model outputs has been used before, but in different contexts, such as anomaly detection (Nun et al 2014) and measurements of diversity (Kuncheva & Whitaker 2003).…”
Section: A Model That Classifies RR Lyrae (mentioning)
confidence: 99%
“…A number of diversity measures have been proposed over the years [1,2,3]. Most measures have been derived intuitively, as attempts to formally characterize the pattern of error of individual classifiers (e.g., the Double-Fault and Disagreement measures [2]).…”
Section: Introduction (mentioning)
confidence: 99%
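A minimal sketch of the two measures named here, computed on binary oracle outputs (1 = correct, 0 = incorrect); the function names are illustrative. Disagreement is the fraction of samples on which exactly one of the two classifiers is correct, and double fault is the fraction on which both are wrong.

```python
def disagreement(a, b):
    # Fraction of samples where exactly one of the two classifiers is correct.
    return sum(1 for x, y in zip(a, b) if x != y) / len(a)

def double_fault(a, b):
    # Fraction of samples where both classifiers are wrong simultaneously.
    return sum(1 for x, y in zip(a, b) if x == 0 and y == 0) / len(a)
```

Higher disagreement indicates more diversity, while lower double fault is preferable, since coincident errors are what a majority vote cannot correct.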
“…A few other measures have been inspired by exact error decompositions derived in the regression field, despite the lack of a direct analogy to classification problems [5]. The Kohavi-Wolpert Variance [3] (and our attempt in [6]) was inspired by the bias-variance-covariance error decomposition of [7]. The measure derived in [8] (which we extended in [6]) was inspired by the ambiguity decomposition of [9], and provided useful insights, leading to the concept of 'good' and 'bad' patterns of diversity.…”
Section: Introduction (mentioning)
confidence: 99%
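The Kohavi-Wolpert variance mentioned in this excerpt can be sketched on an oracle-output matrix. This follows the usual non-pairwise formulation KW = (1/(N·L²)) Σ_j l_j (L − l_j), where L is the number of classifiers, N the number of samples, and l_j the number of classifiers correct on sample j; the function name and the rows-are-classifiers input layout are assumptions of this example.

```python
def kohavi_wolpert_variance(oracle):
    """oracle: list of per-classifier oracle-output lists
    (rows = classifiers, columns = samples; 1 = correct, 0 = incorrect)."""
    L = len(oracle)       # number of classifiers
    N = len(oracle[0])    # number of samples
    total = 0
    for j in range(N):
        l_j = sum(row[j] for row in oracle)  # classifiers correct on sample j
        total += l_j * (L - l_j)
    return total / (N * L ** 2)
```

The term l_j(L − l_j) is maximised when the team splits evenly on a sample, so larger KW values indicate more diverse error patterns.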