1996
DOI: 10.1007/bf00058611
Error reduction through learning multiple descriptions

Abstract: Learning multiple descriptions for each class in the data has been shown to reduce generalization error, but the amount of error reduction varies greatly from domain to domain. This paper presents a novel empirical analysis that helps to understand this variation. Our hypothesis is that the amount of error reduction is linked to the "degree to which the descriptions for a class make errors in a correlated manner." We present a precise and novel definition for this notion and use twenty-nine data sets …
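The abstract's central notion, descriptions for a class making errors in a correlated manner, can be illustrated with a short sketch. The Python snippet below is a minimal illustration under simple assumptions, not the paper's exact definition: it estimates, for every pair of classifiers, the fraction of test examples on which both are wrong, then averages over pairs. The function name pairwise_error_correlation and the toy arrays are hypothetical.

import numpy as np

def pairwise_error_correlation(predictions, y_true):
    # predictions: (n_classifiers, n_samples) array of predicted labels
    # y_true:      (n_samples,) array of true labels
    # Returns the mean, over all classifier pairs, of the fraction of samples
    # on which both classifiers err. (One common formalization of correlated
    # errors; the paper's precise measure may differ.)
    predictions = np.asarray(predictions)
    y_true = np.asarray(y_true)
    errors = predictions != y_true          # boolean error matrix
    n = errors.shape[0]
    pair_scores = []
    for i in range(n):
        for j in range(i + 1, n):
            both_wrong = np.mean(errors[i] & errors[j])
            pair_scores.append(both_wrong)
    return float(np.mean(pair_scores))

# Toy example: three classifiers, five test samples
y_true = np.array([0, 1, 1, 0, 1])
preds = np.array([
    [0, 1, 0, 0, 1],   # classifier A
    [0, 1, 0, 1, 1],   # classifier B
    [1, 1, 1, 0, 1],   # classifier C
])
print(pairwise_error_correlation(preds, y_true))  # 0.0667: errors rarely coincide

Lower values indicate more diverse ensembles, which, per the paper's hypothesis, is the regime in which combining multiple descriptions yields the largest error reduction.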

Cited by 166 publications (140 citation statements)
References 19 publications
“…This work is fundamentally different from other recent machine learning work on combining multiple models (Ali & Pazzani, 1996). That work combines models in order to boost performance for a fixed cost and class distribution.…”
Section: Limitations and Future Work
confidence: 98%
“…Diversity is a desirable property of an ensemble of classifiers (Ali & Pazzani, 1996; Tumer & Ghosh, 1995). One metric of diversity in an ensemble of classifiers is the error correlation.…”
Section: Measuring Diversity
confidence: 99%
“…Previous work in the field has demonstrated that a combined classifier often outperforms all individual systems involved [1].…”
Section: Classifiers Combination
confidence: 99%