2001
DOI: 10.1007/pl00011679

Arbitrating Among Competing Classifiers Using Learned Referees

Abstract: The situation in which the results of several different classifiers and learning algorithms are obtainable for a single classification problem is common. In this paper, we propose a method that takes a collection of existing classifiers and learning algorithms, together with a set of available data, and creates a combined classifier that takes advantage of all of these sources of knowledge. The basic idea is that each classifier has a particular subdomain for which it is most reliable. Therefore, we induce a r…

Cited by 64 publications (75 citation statements)
References 35 publications (31 reference statements)
“…Here we take the predictions of learning algorithms as relevant information in an attempt to improve the original example representation. Alternatively, one may induce referees which capture the area of expertise of each base learner and arbitrate among them by selecting the most reliable base learner for the examples in each subdomain (Ortega, Koppel, & Argamon, 2001). …”
Section: Exploiting Meta-knowledge With a Set of Learning Algorithms (mentioning; confidence: 99%)
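The referee idea quoted above can be sketched as follows. This is a minimal illustration in a scikit-learn style, not the paper's implementation: the dataset, the choice of base learners, and the use of logistic regression as the referee model are all assumptions made for the example. Each referee is trained to predict whether its base learner classifies an example correctly, and arbitration selects, per test example, the base learner whose referee reports the highest reliability.

```python
# Hedged sketch of referee-based arbitration (assumed setup, not the
# paper's exact method). One referee per base learner predicts whether
# that learner will be correct on a given example.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=600, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Base classifiers with (presumably) different areas of expertise.
bases = [DecisionTreeClassifier(max_depth=3, random_state=0), GaussianNB()]
for clf in bases:
    clf.fit(X_train, y_train)

# Referees: each one learns where its base classifier tends to be correct.
referees = []
for clf in bases:
    correct = (clf.predict(X_train) == y_train).astype(int)
    referees.append(LogisticRegression(max_iter=1000).fit(X_train, correct))

# Arbitration: for each test example, defer to the base learner whose
# referee assigns it the highest probability of being correct.
reliability = np.column_stack(
    [ref.predict_proba(X_test)[:, 1] for ref in referees])
chosen = reliability.argmax(axis=1)
preds = np.array([bases[c].predict(X_test[i:i + 1])[0]
                  for i, c in enumerate(chosen)])
print("arbitrated accuracy:", (preds == y_test).mean())
```

The key design point is that referees are trained on the original attributes, so the arbiter carves the instance space into subdomains of expertise rather than blending predictions.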
“…In contrast, delegation produces models which are completely and exclusively defined in terms of the original attributes and class. Arbitrating (Ortega et al., 2001) and grading (Seewald & Fürnkranz, 2001) are also related to delegation, but both learn external referees to assess the probability of error of each classifier from the pool of base classifiers, and their areas of expertise. No new attributes are generated.…”
Section: Introduction (mentioning; confidence: 99%)
“…Grading. The defining feature of methods in this category (also known as the referee method [5,6]) is that, instead of directly learning the relationship between the predictions of the base classifiers and the actual class (as in stacking), the meta-classifier grades the base classifiers and selects either a single base classifier or a subset of base classifiers likely to be correct for the given test instance. The intuition behind grading is that in large datasets, where multiple functions may define the relationship between predictor and response variables, it is important to choose the correct function for any given test instance.…”
Section: Introduction (mentioning; confidence: 99%)