2013
DOI: 10.3233/ida-130585

Sharpened graph ensemble for semi-supervised learning

Abstract: The generalization ability of a machine learning algorithm varies with the values specified for its model parameters and the degree of noise in the training dataset. If the dataset contains a sufficient number of labeled data points, the optimal value of a model parameter can be found via validation on a subset of the given dataset. However, for semi-supervised learning, one of the most recent families of learning algorithms, this option is not as readily available as in conventional supervised learning. In semi-supervised learning, it is a…

Cited by 6 publications (4 citation statements)
References 23 publications
“…The following function (Eq. 5) first introduced by [31] satisfies this assumption, and its solution is shown in Eq. 6.…”
Section: Data Weighing and Confident Data Labelling
Mentioning confidence: 99%
“…As our algorithm is based on the so-called "cluster assumption" algorithm which is summarized by [29]: two points are likely to have the same class label if there is a path connecting them that passes through regions of high density only. The following quadratic objective function satisfies this assumption, and it was first introduced by [30]:…”
Section: Stage Three: Dividing Co-training Data Labeling
Mentioning confidence: 99%
“…Three criteria are used in this study. The first is accuracy, which evaluates how the model output matches the ground truth label [ 22 ]. Since the output of h is a pair of labels, that is, h ( x i ) = ( y i , z i ), the simple zero-one loss is not suitable in this case.…”
Section: Deep Ensemble Learning
Mentioning confidence: 99%
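The last statement notes that simple zero-one loss is unsuitable when the hypothesis outputs a pair of labels, h(x_i) = (y_i, z_i). The cited study's actual criterion is not reproduced in this report; one hypothetical partial-credit variant, shown purely for illustration, averages the component-wise matches so that getting one of the two labels right scores 0.5 instead of 0:

```python
def pair_accuracy(preds, truths):
    """Partial-credit accuracy for pair-valued outputs h(x) = (y, z).

    A hypothetical variant of zero-one accuracy: each component of the
    pair contributes half a point, so a prediction matching only one
    of the two labels scores 0.5 rather than 0.
    """
    total = 0.0
    for (y_hat, z_hat), (y, z) in zip(preds, truths):
        total += 0.5 * (y_hat == y) + 0.5 * (z_hat == z)
    return total / len(preds)
```

For example, with one fully correct pair and one half-correct pair, this criterion reports 0.75 where strict zero-one accuracy would report 0.5.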