Proceedings of the 24th International Conference on Machine Learning 2007
DOI: 10.1145/1273496.1273507
Discriminative learning for differing training and test distributions

Abstract: We address classification problems for which the training instances are governed by a distribution that is allowed to differ arbitrarily from the test distribution; such problems are also referred to as classification under covariate shift. We derive a solution that is purely discriminative: neither the training nor the test distribution is modeled explicitly. We formulate the general problem of learning under covariate shift as an integrated optimization problem. We derive a kernel logistic regression classifier for differing…
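For orientation, here is a minimal sketch of the standard two-step covariate shift correction that this line of work builds on: a discriminative "domain" classifier estimates the density ratio p_test(x)/p_train(x), which is then used to reweight the training loss. This is an illustration under assumptions, not the paper's method; Bickel et al. instead integrate weight estimation and classifier training into a single optimization. It assumes scikit-learn, and the function names are ours.

import numpy as np
from sklearn.linear_model import LogisticRegression

def estimate_importance_weights(X_train, X_test):
    # Train a probabilistic "domain" classifier to separate training from test inputs.
    X = np.vstack([X_train, X_test])
    d = np.concatenate([np.zeros(len(X_train)), np.ones(len(X_test))])
    domain_clf = LogisticRegression(max_iter=1000).fit(X, d)
    p_test_given_x = domain_clf.predict_proba(X_train)[:, 1]
    # Bayes' rule: p_test(x) / p_train(x) is proportional to P(test | x) / P(train | x).
    ratio = p_test_given_x / np.clip(1.0 - p_test_given_x, 1e-12, None)
    return ratio * len(X_train) / len(X_test)

def train_shift_corrected_classifier(X_train, y_train, X_test):
    # Reweight the training loss by the estimated density ratio.
    w = estimate_importance_weights(X_train, X_test)
    clf = LogisticRegression(max_iter=1000)
    clf.fit(X_train, y_train, sample_weight=w)
    return clf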

Cited by 318 publications, 2008-2024 (234 citation statements)
References 10 publications
“…As another approach, one can use logistic regression for the inference of density ratios, since the ratio of two probability densities is directly connected to the posterior probability of labels in classification problems. Using the Bayes formula, the estimated posterior probability can be transformed to an estimator of density ratios (Bickel et al. 2007). …”
Section: Introduction (mentioning)
confidence: 99%
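For concreteness, the transformation mentioned in the statement above can be written as the following standard identity (notation is ours, not quoted from the citing paper). Introduce a selector variable s with s = 1 for test and s = 0 for training instances; then

\[
\frac{p_{\text{test}}(x)}{p_{\text{train}}(x)}
  = \frac{p(x \mid s=1)}{p(x \mid s=0)}
  = \frac{P(s=0)}{P(s=1)} \cdot \frac{P(s=1 \mid x)}{P(s=0 \mid x)},
\]

where P(s = 1 | x) is the posterior produced by a logistic regression model trained to separate test from training instances.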
“…These estimates are compared with our best estimate of the true values (the gold standard computed over the entire corpus) in Table 3.2. Table 3.3 shows the result of splitting the values of head:enron into three discrete ranges: [0-9], [10-19], [20-30], and the effect of two choices of α and β. Values in the center range clearly predict spam, while extreme values predict non-spam.…”
Section: An Example (mentioning)
confidence: 99%
“…The positive class expansion problem appears to have some relationship with PU-learning [12,17], concept drift [9,10], and covariate shift [8,1]. But in fact it is very different from these tasks.…”
Section: Related Work (mentioning)
confidence: 99%
“…Approaches (e.g. [8,1]) addressing this problem try to correct the bias in the training instances, such that minimizing error on the training instances corresponds to minimizing error on the test instances.…”
Section: Related Work (mentioning)
confidence: 99%
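The correspondence claimed above is the standard importance-weighting identity under covariate shift (our clarification, not a quotation). Because p(y | x) is shared between the training and test distributions,

\[
\mathbb{E}_{(x,y) \sim p_{\text{test}}}\!\big[\ell(f(x), y)\big]
  = \mathbb{E}_{(x,y) \sim p_{\text{train}}}\!\left[\frac{p_{\text{test}}(x)}{p_{\text{train}}(x)}\,\ell(f(x), y)\right],
\]

so minimizing the reweighted training error also minimizes the expected test error, provided the density ratio is estimated accurately.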