2002
DOI: 10.1162/089976602753284446
Adjusting the Outputs of a Classifier to New a Priori Probabilities: A Simple Procedure

Abstract: It sometimes happens (for instance, in case-control studies) that a classifier is trained on a data set that does not reflect the true a priori probabilities of the target classes on real-world data. This may have a negative effect on the classification accuracy obtained on the real-world data set, especially when the classifier's decisions are based on the a posteriori probabilities of class membership. Indeed, in this case, the trained classifier provides estimates of the a posteriori probabilities that are n…
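The adjustment the abstract describes amounts to reweighting each estimated posterior by the ratio of the new to the training a priori probabilities and renormalizing. A minimal sketch of that reweighting (function and variable names are our own, not from the paper):

```python
import numpy as np

def adjust_posteriors(posteriors, train_priors, new_priors):
    """Rescale a classifier's a posteriori probabilities to new a priori
    probabilities: p_new(c|x) is proportional to p(c|x) * p_new(c) / p_train(c)."""
    posteriors = np.asarray(posteriors, dtype=float)   # shape (n_samples, n_classes)
    weights = np.asarray(new_priors, dtype=float) / np.asarray(train_priors, dtype=float)
    adjusted = posteriors * weights                    # reweight each class column
    return adjusted / adjusted.sum(axis=1, keepdims=True)  # renormalize each row

# Example: training set was balanced (50/50), real-world priors are 90/10.
p = np.array([[0.6, 0.4],
              [0.3, 0.7]])
print(adjust_posteriors(p, [0.5, 0.5], [0.9, 0.1]))
# The class-0 probability of each sample rises under the 90/10 priors.
```

Note that the adjustment leaves the ranking of samples within a class unchanged; only the decision threshold effectively moves.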

Cited by 261 publications (324 citation statements)
References 14 publications
“…The main reason is that those methods were not tested as quantifiers. This is, for instance, the case of [Chan and Ng 2006], in which [Vucetic and Obradovic 2001; Saerens et al 2002] were applied to boost the accuracy of word sense disambiguation systems.…”
Section: Applications
Confidence: 99%
“…The goal of [Alaiz-Rodríguez et al 2008] is to design an automatic process to quantify cell proportions. The authors use a combination of computer vision techniques and quantification algorithms [Saerens et al 2002; Forman 2005]; see Sections 6.2 and 8.1. The experimental results reported with boar sperm samples using such techniques outperform previous classification-based approaches in terms of several measures, including mean absolute error, KL divergence, and mean relative error.…”
Section: Applications
Confidence: 99%
“…In [26], bias correction is based on the estimated decisions of the classifier, while in [23] the authors estimate the a priori distribution of a new dataset from the features. The main difference with previous approaches is that we use the similarity scores from automatic classification methods to determine the counts, whereas previous methods work either directly at the decision level (having less information) or at the feature level (and cannot use the classifier output).…”
Section: Related Work
Confidence: 99%
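When the real-world priors are themselves unknown, the procedure of Saerens et al. (2002) estimates them from the classifier's outputs on the unlabeled set with an EM-style iteration: adjust the posteriors to the current prior estimate, then take the mean adjusted posterior per class as the next estimate. A sketch under those assumptions (names are our own):

```python
import numpy as np

def estimate_priors_em(posteriors, train_priors, n_iter=100, tol=1e-8):
    """Estimate the unknown a priori probabilities of an unlabeled set from a
    classifier's a posteriori outputs, via the EM iteration of Saerens et al. (2002)."""
    p = np.asarray(posteriors, dtype=float)        # shape (n_samples, n_classes)
    train = np.asarray(train_priors, dtype=float)
    priors = train.copy()                          # start from the training priors
    for _ in range(n_iter):
        # E-step: adjust each posterior to the current prior estimate.
        adjusted = p * (priors / train)
        adjusted /= adjusted.sum(axis=1, keepdims=True)
        # M-step: new priors are the mean adjusted posterior per class.
        new_priors = adjusted.mean(axis=0)
        if np.abs(new_priors - priors).max() < tol:
            priors = new_priors
            break
        priors = new_priors
    return priors
```

The estimated priors can then be fed back into the posterior-adjustment step; this two-stage use is the quantification setting the citing papers above discuss.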