2009
DOI: 10.1109/tpami.2008.224
Optimal Classifier Fusion in a Non-Bayesian Probabilistic Framework

Abstract: The combination of the output of classifiers has been one of the strategies used to improve classification rates in general purpose classification systems. Some of the most common approaches can be explained using the Bayes' formula. In this paper, we tackle the problem of the combination of classifiers using a non-Bayesian probabilistic framework. This approach permits us to derive two linear combination rules that minimize misclassification rates under some constraints on the distribution of classifiers. In …
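The linear fusion setting the abstract refers to can be written generically as follows; this is a standard formulation, with notation chosen for illustration rather than taken from the paper:

s_k(x) = \sum_{i=1}^{L} \alpha_i \, s_{i,k}(x), \qquad \hat{y}(x) = \arg\max_k s_k(x)

where s_{i,k}(x) is classifier i's score for class k and the weights \alpha_i are chosen to minimize the misclassification rate under the paper's distributional constraints.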

Cited by 69 publications (39 citation statements)
References 34 publications
“…symbol recognition [13], activity recognition [14,18], and orientation estimation [7]. This paper provides a generic view on the R-transform and the R-signature while maintaining their beneficial properties, leading to two main theoretical contributions.…”
Section: Previous Work (mentioning)
confidence: 99%
“…Max, Min, Median and Majority Vote can be derived. Besides these combination rules developed under the Bayesian framework [3], Terrades et al. [7] tackled the classifier combination problem using a non-Bayesian probabilistic framework. Under the assumptions that classifiers can be combined linearly and that the scores follow independent normal distributions, the independent normal (IN) combination rule was derived [7].…”
Section: Related Work On Classifier Fusion (mentioning)
confidence: 99%
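To make the IN rule cited above concrete, here is a minimal Python sketch for the two-class case, assuming per-classifier score means and variances are estimated on held-out validation data; the Fisher-style weight formula is an illustrative stand-in consistent with the independence assumption, not necessarily the exact rule derived in [7]:

import numpy as np

# Illustrative linear score fusion in the spirit of the IN rule from
# Terrades et al. [7]: each classifier's scores are assumed to follow
# independent normal distributions, so the covariance is diagonal and
# one scalar weight per classifier suffices.

def in_fusion_weights(pos_scores, neg_scores):
    """Estimate one weight per classifier from validation scores.

    pos_scores, neg_scores: arrays of shape (n_samples, n_classifiers)
    holding each classifier's score on positive / negative examples.
    """
    mu_pos = pos_scores.mean(axis=0)
    mu_neg = neg_scores.mean(axis=0)
    # Pooled per-classifier variance (diagonal covariance = independence).
    var = 0.5 * (pos_scores.var(axis=0) + neg_scores.var(axis=0))
    # Fisher-style weights (assumption): separability divided by spread.
    return (mu_pos - mu_neg) / np.maximum(var, 1e-12)

def fuse(scores, weights):
    """Linear combination of classifier scores: s = w . x."""
    return scores @ weights

# Toy usage: three classifiers, synthetic validation scores.
rng = np.random.default_rng(0)
pos = rng.normal([0.8, 0.7, 0.6], [0.1, 0.2, 0.3], size=(200, 3))
neg = rng.normal([0.4, 0.4, 0.5], [0.1, 0.2, 0.3], size=(200, 3))
w = in_fusion_weights(pos, neg)
print(fuse(np.array([0.75, 0.65, 0.55]), w))  # fused score for one sample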
“…For example, in [5], Parzen window density estimation was used to estimate the joint density of the posterior probabilities produced by a selected set of classifiers. Since this requires a large amount of data to estimate the joint distribution accurately [11], Terrades et al. [7] proposed to combine classifiers with a linear model under a normal distribution assumption. When features are not conditionally independent, the covariance matrix of the normal distribution is not diagonal.…”
Section: Related Work On Classifier Fusion (mentioning)
confidence: 99%
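For contrast with the linear model, here is a minimal sketch of Parzen-window (kernel) estimation of a joint score density, in the spirit of the approach attributed to [5]; the Gaussian kernel, the shared bandwidth h, and the toy data are all illustrative choices. The number of samples needed grows quickly with the number of classifiers d, which is the data-hunger the snippet above refers to:

import numpy as np

# Parzen-window estimate of the joint density of classifier outputs,
# using a product Gaussian kernel with a single bandwidth h (assumption).

def parzen_joint_density(query, samples, h=0.1):
    """Estimate p(query) from samples of shape (n, d)."""
    n, d = samples.shape
    diff = (samples - query) / h
    kern = np.exp(-0.5 * np.sum(diff**2, axis=1))
    # Normalization of the product Gaussian kernel in d dimensions.
    norm = n * (h * np.sqrt(2.0 * np.pi)) ** d
    return kern.sum() / norm

# Toy usage: joint density of two correlated classifier scores,
# a case where a diagonal-covariance (independence) model would misfit.
rng = np.random.default_rng(1)
base = rng.normal(0.6, 0.1, size=500)
scores = np.column_stack([base, base + rng.normal(0, 0.05, size=500)])
print(parzen_joint_density(np.array([0.6, 0.6]), scores))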