2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR 2018)
DOI: 10.1109/cvpr.2018.00471
An Efficient and Provable Approach for Mixture Proportion Estimation Using Linear Independence Assumption

Abstract: In this paper, we study the mixture proportion estimation (MPE) problem in a new setting: given samples from the mixture and the component distributions, we identify the proportions of the components in the mixture distribution. To address this problem, we make use of a linear independence assumption, i.e., the component distributions are linearly independent of each other, which is much weaker than the assumptions exploited in previous MPE methods. Based on this assumption, we propose a method (1) that uniquely iden…
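The setting in the abstract can be illustrated with a toy sketch (this is not the paper's algorithm; the setup, the histogram features, and the least-squares solve are all illustrative assumptions): given samples from each component and from the mixture, linear independence of the component distributions makes the proportions the unique solution of a linear system.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy setup: two known component distributions and a
# mixture whose proportions (0.3, 0.7) we pretend are unknown.
n = 100_000
true_pi = np.array([0.3, 0.7])
comp0 = rng.normal(0.0, 1.0, n)   # samples from component 0
comp1 = rng.normal(5.0, 1.0, n)   # samples from component 1
labels = rng.random(n) < true_pi[1]
mix = np.where(labels, rng.normal(5.0, 1.0, n), rng.normal(0.0, 1.0, n))

# Represent each distribution by a histogram density on a shared grid;
# linear independence of the components keeps the system well-posed.
bins = np.linspace(-4.0, 9.0, 60)
h0, _ = np.histogram(comp0, bins=bins, density=True)
h1, _ = np.histogram(comp1, bins=bins, density=True)
hm, _ = np.histogram(mix, bins=bins, density=True)

# Solve hm ~ pi0*h0 + pi1*h1 in the least-squares sense, then project
# onto the probability simplex by clipping and renormalizing.
A = np.stack([h0, h1], axis=1)
pi_hat, *_ = np.linalg.lstsq(A, hm, rcond=None)
pi_hat = np.clip(pi_hat, 0.0, None)
pi_hat = pi_hat / pi_hat.sum()
print(pi_hat)  # close to [0.3, 0.7]
```

With well-separated components the recovery is near-exact; the paper's contribution is a provable method under the much weaker linear-independence condition, not this particular solver.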

Cited by 32 publications (18 citation statements)
References 18 publications
“…On the other hand, although it is costly to annotate a very large-scale dataset, a small set of easily distinguishable observations is assumed to be available in practice. This assumption is also widely used in estimating transition probabilities in the label noise problem [31] and class priors in semi-supervised learning [37]. Therefore, in order to estimate Q, we manually assign true labels to 5 or 10 observations in each class.…”
Section: Estimating Q
confidence: 99%
“…Here, we assume the noise level τ is known and set R(T) = 1 − min{(T/T_k)τ, τ} with T_k = 10. If τ is not known in advance, it can be inferred using validation sets [33,75]. As for performance measurement, we use test accuracy, i.e., test accuracy = (# of correct predictions) / (# of test examples).…”
Section: Experiments On Synthetic Balanced Noisy Datasets
confidence: 99%
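The quoted schedule R(T) = 1 − min{(T/T_k)τ, τ} can be sketched as a one-line function (the function name is hypothetical; the citing paper's implementation may differ): the kept-sample ratio decays linearly from 1 over the first T_k epochs and then stays flat at 1 − τ.

```python
def keep_ratio(T: int, tau: float, T_k: int = 10) -> float:
    """R(T) = 1 - min((T/T_k)*tau, tau): linear decay from 1 to 1 - tau
    over the first T_k epochs, constant afterwards. tau is the (known
    or estimated) noise rate; T_k = 10 as in the quoted experiment."""
    return 1.0 - min((T / T_k) * tau, tau)
```

For example, with τ = 0.4 the ratio starts at 1.0 at epoch 0 and plateaus at 0.6 from epoch 10 onward.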
“…where τ is the estimated noise rate, which can be inferred using validation sets [12,25]. The value of λ_t decreases quickly over the first T epochs until reaching 1 − τ.…”
Section: Considering a K-class Classification Problem With the Noisy Training Data
confidence: 99%