2011
DOI: 10.1109/tpami.2010.92

Semi-Supervised Learning via Regularized Boosting Working on Multiple Semi-Supervised Assumptions

Abstract: Semi-supervised learning concerns the problem of learning in the presence of labeled and unlabeled data. Several boosting algorithms have been extended to semi-supervised learning with various strategies. To our knowledge, however, none of them takes all three semi-supervised assumptions, i.e., smoothness, cluster, and manifold assumptions, together into account during boosting learning. In this paper, we propose a novel cost functional consisting of the margin cost on labeled data and the regularization penal…
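As a rough illustration of the kind of objective the abstract describes, a minimal sketch of a regularized margin-cost functional is given below. The exponential margin loss, the trade-off weight λ, and the pairwise similarities w_ij over the l labeled and u unlabeled points are assumptions for illustration, not the paper's exact formulation:

```latex
% A minimal sketch, assuming a typical graph-regularized boosting
% objective: margin cost on the l labeled points plus a pairwise
% smoothness penalty over all l + u points, traded off by lambda.
\[
  C(f) \;=\; \sum_{i=1}^{l} \exp\!\bigl(-y_i f(x_i)\bigr)
  \;+\; \lambda \sum_{i,j=1}^{l+u} w_{ij}\,\bigl(f(x_i) - f(x_j)\bigr)^{2}
\]
```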

Year published (citing publications): 2012–2021

Cited by 124 publications (65 citation statements). References 14 publications.
“…With SSC we may pursue two different objectives: transductive and inductive classification [5]. The former is devoted to predicting the correct labels of a set of unlabeled examples that is also used during the training phase.…”
Section: Introduction (mentioning)
confidence: 99%
“…As regards examples of models based on the cluster assumption, we can find generative models [9] or semi-supervised support vector machines [10]. Recent studies have addressed multiple assumptions in one model [11], [5], [12].…”
Section: Introduction (mentioning)
confidence: 99%
“…SemiBoost combines supervised learning with semi-supervised learning by utilizing both the labelled and unlabelled training data [18]. Its purpose is to reduce the generalization error when the labelled training data are insufficient [19]. FloatBoost deletes the less effective weak classifiers during training by a backtracking mechanism, so that it outperforms AdaBoost when it has the same number of weak classifiers as AdaBoost [20].…”
Section: Introduction (mentioning)
confidence: 99%
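The SemiBoost description quoted above lends itself to a short sketch. The following is a minimal, assumption-laden illustration of a SemiBoost-style pseudo-labelling loop: each round fits a weak learner, then adds the unlabelled points the current ensemble scores most confidently to the training pool with their predicted labels. The confidence heuristic, the AdaBoost-style weighting, and the function name are illustrative, not the algorithm published in [18]:

```python
# Illustrative SemiBoost-style pseudo-labelling loop (a sketch, not the
# published algorithm). y_lab must be in {-1, +1}.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def semiboost_sketch(X_lab, y_lab, X_unl, n_rounds=10, top_frac=0.1):
    """Returns a list of (weight, weak_learner) pairs."""
    ensemble = []
    X_pool, y_pool = X_lab, y_lab
    for _ in range(n_rounds):
        stump = DecisionTreeClassifier(max_depth=1).fit(X_pool, y_pool)
        # Weight the weak learner by its error on the current pool
        # (AdaBoost-style; SemiBoost's own weighting differs).
        err = np.clip(np.mean(stump.predict(X_pool) != y_pool), 1e-10, 1 - 1e-10)
        alpha = 0.5 * np.log((1 - err) / err)
        ensemble.append((alpha, stump))
        # Use the ensemble score on unlabelled data as a confidence measure.
        scores = sum(a * clf.predict(X_unl) for a, clf in ensemble)
        k = max(1, int(top_frac * len(X_unl)))
        confident = np.argsort(-np.abs(scores))[:k]
        # Grow the pool with the most confidently pseudo-labelled points.
        X_pool = np.vstack([X_lab, X_unl[confident]])
        y_pool = np.concatenate([y_lab, np.where(scores[confident] >= 0, 1.0, -1.0)])
    return ensemble
```

The final prediction would then be the sign of the weighted vote, sign(Σ_t α_t h_t(x)).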
“…Some SSIL algorithms have been developed recently, such as RegBoost [22] and SemiBoost [23]. A more detailed review on semi-supervised learning can be found in [24].…”
Section: Introduction (mentioning)
confidence: 99%
“…The manifold assumption holds that the high-dimensional data lie on a low-dimensional nonlinear manifold. Properties of the manifold ensure more accurate density estimation or more appropriate similarity measures [22].…”
Section: Introduction (mentioning)
confidence: 99%
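One common way to exploit the manifold assumption described in the quote above is to replace raw Euclidean similarity with a k-nearest-neighbour graph whose edge weights decay with distance, so that similarity is only trusted locally along the data manifold. The sketch below is illustrative; the function name and the parameters k and sigma are assumptions, not values from the cited work:

```python
# Illustrative manifold-aware similarity: Gaussian weights restricted to
# k-nearest-neighbour edges (a sketch under assumed parameter choices).
import numpy as np
from sklearn.neighbors import kneighbors_graph

def manifold_similarity(X, k=10, sigma=1.0):
    """Sparse symmetric matrix with w_ij = exp(-d_ij**2 / sigma**2) on
    k-nearest-neighbour edges and zero elsewhere."""
    dist = kneighbors_graph(X, n_neighbors=k, mode="distance")
    dist = dist.maximum(dist.T)  # symmetrise: keep an edge if either end has it
    W = dist.copy()
    W.data = np.exp(-(W.data ** 2) / sigma ** 2)
    return W
```

A matrix of this kind can then serve as the pairwise weights w_ij in a graph-regularized objective such as the one sketched after the abstract.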