2017
DOI: 10.1007/s10994-017-5675-z
Learning safe multi-label prediction for weakly labeled data

Abstract: In this paper we study multi-label learning with weakly labeled data, i.e., data whose training-example labels are incomplete, a situation that commonly occurs in real applications such as image classification and document categorization. This setting includes, e.g., (i) semi-supervised multi-label learning, where completely labeled examples are partially known; (ii) weak label learning, where relevant labels of examples are partially known; (iii) extended weak label learning, where relevant and irrelevant labels of examples are parti…

Cited by 29 publications (11 citation statements) · References 21 publications
“…supervised multi-label learning (Wei et al 2018), semisupervised weak label learning (Dong, Li, and Zhou 2018) and semi-supervised partial label learning (Wang, Li, and Zhou 2019). It is worth noting that our paper is different from these works as we focus on the mix of severe label noise and biased label distribution.…”
Section: Related Work
confidence: 97%
“…Semi-Supervised Learning (SSL) (Chapelle, Schölkopf, and Zien 2006; Zhou and Li 2010; Wei et al. 2018; Li, Guo, and Zhou 2019) aims to make use of unlabeled data for training, typically a small set of labeled data together with a large collection of unlabeled data. Graph-based SSL algorithms (Zhu, Ghahramani, and Lafferty 2003; Zhou et al. 2003; Li, Wang, and Zhou 2016) have a long history and propagate limited label information to unlabeled examples following clustering or manifold assumptions.…”
Section: Related Work
confidence: 99%
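The graph-based propagation idea quoted above can be illustrated with a minimal sketch in the spirit of the label-spreading scheme of Zhou et al. (2003): labels diffuse over a normalized affinity graph until unlabeled points inherit the labels of their cluster. This is not the cited papers' implementation; the function name `label_propagation` and the parameters `alpha` and `n_iter` are our own choices for illustration.

```python
import numpy as np

def label_propagation(W, y, n_classes, alpha=0.99, n_iter=100):
    """Iterative label spreading over an affinity graph (illustrative sketch).

    W: (n, n) symmetric affinity matrix with zero diagonal.
    y: length-n array of class indices; -1 marks unlabeled points.
    """
    n = W.shape[0]
    # Symmetric normalization S = D^{-1/2} W D^{-1/2}
    d = W.sum(axis=1)
    d_inv_sqrt = 1.0 / np.sqrt(np.maximum(d, 1e-12))
    S = W * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]
    # One-hot seed matrix; rows for unlabeled points start at zero
    Y = np.zeros((n, n_classes))
    labeled = y >= 0
    Y[np.arange(n)[labeled], y[labeled]] = 1.0
    # Diffuse labels while anchoring labeled points to their seeds
    F = Y.copy()
    for _ in range(n_iter):
        F = alpha * (S @ F) + (1 - alpha) * Y
    return F.argmax(axis=1)

# Tiny example: two well-separated clusters on a line, one seed label each.
X = np.array([0.0, 0.1, 0.2, 5.0, 5.1, 5.2])
W = np.exp(-(X[:, None] - X[None, :]) ** 2)
np.fill_diagonal(W, 0.0)
y = np.array([0, -1, -1, 1, -1, -1])
print(label_propagation(W, y, n_classes=2))  # each seed label spreads to its own cluster
```

The example follows the clustering assumption the excerpt mentions: because cross-cluster affinities are nearly zero, each seed's label reaches only the unlabeled points in its own cluster.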