Proceedings of the Twenty-Sixth International Joint Conference on Artificial Intelligence 2017
DOI: 10.24963/ijcai.2017/444
Multi-Positive and Unlabeled Learning

Abstract: The positive and unlabeled (PU) learning problem focuses on learning a classifier from positive and unlabeled data. Several methods have been developed to solve the PU learning problem, but they are often limited in practical applications, since they involve only binary classes and cannot easily be adapted to multi-class data. Here we propose a one-step method that directly trains a multi-class model on the given multi-class input data and predicts the label based on the model decision. S…
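To make the multi-positive-and-unlabeled setting concrete, here is an illustrative toy sketch (not the paper's actual algorithm): K known positive classes each have a small labeled subset, while the unlabeled pool also contains a hidden negative class. A naive K-way classifier fit only on the labeled positives can never predict that hidden class, which is the gap one-step multi-class PU methods aim to close. All class means, sample sizes, and the use of `LogisticRegression` are assumptions for illustration.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

# K positive classes plus one "negative" class (index K) that is never labeled.
K = 3
means = np.array([[0, 4], [4, 0], [-4, 0], [0, -4]])  # last row: hidden negative class
n_per = 500
X = np.vstack([rng.normal(m, 1.0, size=(n_per, 2)) for m in means])
y = np.repeat(np.arange(K + 1), n_per)

# Only a small labeled subset per positive class; everything else is unlabeled.
labeled = np.zeros(len(y), dtype=bool)
for k in range(K):
    idx = np.flatnonzero(y == k)
    labeled[rng.choice(idx, size=30, replace=False)] = True

# Naive baseline: a K-way softmax fit on the labeled positives only.
# It performs well on the positive classes but cannot output the hidden
# negative class at all.
clf = LogisticRegression().fit(X[labeled], y[labeled])
acc_pos = (clf.predict(X[y < K]) == y[y < K]).mean()
```

Note that `clf.classes_` contains only the K positive labels: the unlabeled pool is simply ignored by this baseline, rather than being exploited as in one-step multi-class PU training.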

Cited by 52 publications (59 citation statements). References 11 publications.
“…Elkan et al. [17] showed that the probabilities output by a classifier trained on positive and unlabeled examples differ from the true class posteriors. MPU [18] succeeds in selecting multi-class data from the unlabeled data via a multi-positive loss. Xu et al. [19] used a PU classifier to find data from unlabeled samples in the cloud based on a few labeled samples, realizing knowledge transfer with only a small number of labeled samples.…”
Section: B. Positive-Unlabeled Classification
confidence: 99%
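The Elkan and Noto result cited above can be sketched in a few lines: train a "non-traditional" classifier to separate labeled from unlabeled examples, estimate the labeling frequency c = p(s=1 | y=1) as the mean score on the labeled positives, then divide scores by c to recover calibrated positive-class probabilities. The synthetic 1-D data, the labeling frequency of 0.3, and the use of `LogisticRegression` are assumptions for illustration.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic 1-D data: positives centered at +2, negatives at -2.
n = 2000
y_true = rng.integers(0, 2, size=n)                      # hidden true labels
X = (4.0 * y_true - 2.0 + rng.standard_normal(n)).reshape(-1, 1)

# Only a fraction c of the true positives carries a label (s = 1).
c_true = 0.3
s = (y_true == 1) & (rng.random(n) < c_true)

# Step 1: fit g(x) ~ p(s=1 | x) on labeled-vs-unlabeled data.
g = LogisticRegression().fit(X, s.astype(int))

# Step 2: estimate c = p(s=1 | y=1) as the mean score on labeled positives.
c_hat = g.predict_proba(X[s])[:, 1].mean()

# Step 3: calibrate, since g(x) = c * p(y=1 | x) under the SCAR assumption.
p_y = np.clip(g.predict_proba(X)[:, 1] / c_hat, 0.0, 1.0)
accuracy = ((p_y >= 0.5).astype(int) == y_true).mean()
```

The division by c_hat relies on the "selected completely at random" (SCAR) assumption: labeled positives are an unbiased sample of all positives.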
“…The negative cases (N) identified by CSRs are indeed hard cases, since their prediction scores are above the preset confidence threshold yet they are mis-classified by the existing model. Creating classifiers with only P and U is an active research area (Elkan and Noto, 2008; Xu et al., 2017). Some research has explored models that also include N, but it has been concerned only with binary classifiers (Fei and Liu, 2015; Hsieh et al., 2019; Li et al., 2010).…”
Section: Figure 1: Intelligent Customer Support Loop
confidence: 99%
“…Note that in addition to the above-mentioned methods, there are some other PU learning algorithms developed in recent years, such as generative adversarial PU learning [32], multi PU learning [33], semi-supervised classification-based PU learning [34], and margin-based PU learning [35].…”
Section: Positive and Unlabeled Learning
confidence: 99%