2016
DOI: 10.1155/2016/3057481

Self-Trained LMT for Semisupervised Learning

Abstract: The key asset of semisupervised classification methods is their use of available unlabeled data in combination with a much smaller set of labeled examples, so as to increase classification accuracy compared with supervised methods, which by default use only the labeled data during the training phase. Both the absence of automated mechanisms that produce labeled data and the high cost of the human effort needed to complete the labeling procedure in several scienti…

Cited by 28 publications (13 citation statements). References 40 publications.
“…After that, the classifier is retrained with its own most confident predictions, together with the initially provided labeled examples. However, existing self-training approaches [12,51,32,31] are based on hand-crafted features, which are much more limited than the features learned by CNNs. [27] and [44] use CNNs in the self-training framework, but they apply it to relatively simple classification datasets like MNIST [26] and CIFAR-10 [24].…”
Section: Related Work
mentioning (confidence: 99%)
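The pseudo-labeling loop described in the statement above can be sketched in a few lines. This is a minimal illustration, not the cited paper's exact procedure: LogisticRegression stands in for the Logistic Model Tree (LMT) base learner, and the 0.9 confidence threshold and the iteration cap are assumptions.

```python
# Minimal self-training sketch: retrain the classifier on its own most
# confident predictions together with the initially provided labeled examples.
# LogisticRegression is a stand-in base learner (the paper uses LMT);
# the threshold and round cap below are illustrative assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression

def self_train(X_labeled, y_labeled, X_unlabeled, threshold=0.9, max_rounds=10):
    X_l, y_l = X_labeled.copy(), y_labeled.copy()
    X_u = X_unlabeled.copy()
    clf = LogisticRegression(max_iter=1000)
    for _ in range(max_rounds):
        clf.fit(X_l, y_l)                      # train on labeled + pseudo-labeled data
        if len(X_u) == 0:
            break
        proba = clf.predict_proba(X_u)
        confidence = proba.max(axis=1)
        keep = confidence >= threshold         # keep only the most confident predictions
        if not keep.any():
            break
        pseudo = clf.classes_[proba.argmax(axis=1)[keep]]
        X_l = np.vstack([X_l, X_u[keep]])      # add pseudo-labeled examples
        y_l = np.concatenate([y_l, pseudo])
        X_u = X_u[~keep]                       # remove them from the unlabeled pool
    return clf
```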
“…As the first parameter, recent works [55–57] have proposed various predictive models, such as generative models [22,58], low-density separation models [59], and graph-based models [60]. For the second parameter, at least two alternatives, namely self-training [61,62] or cotraining [63], can be applied to assign a label to an unlabeled instance using either a single predictive model or an ensemble of predictive models. The last parameter concerns how to handle test instances, where the two choices are (i) to handle the test instances separately from the unlabeled instances (inductive learning) or (ii) to treat them as unlabeled instances in the training step (transductive learning).…”
Section: Introduction
mentioning (confidence: 99%)
“…There are several categories of semi-supervised learning algorithms, such as Self-Training, the Generative Model and the Transductive Support Vector Machine. Self-Training is a wrapper algorithm in which a classifier is first trained on the labeled data and then used to classify the unlabelled data [38]. The unlabelled data associated with the highest confidence score is added to the training set.…”
Section: Machine Learning Algorithm
mentioning (confidence: 99%)
“…The unlabelled data associated with the highest confidence score will be added to the training set. As stated in [38], Self-Training is the simplest algorithm in semi-supervised learning, yet it still gives good solutions to the classification problem. In contrast to Self-Training, the Generative Model has difficulty providing good solutions to the classification problem, especially when there is more unlabelled data than labeled data, because it struggles to balance the effect of the unlabelled and labeled data.…”
Section: Machine Learning Algorithm
mentioning (confidence: 99%)
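As a usage note, the wrapper scheme described in [38] corresponds closely to scikit-learn's SelfTrainingClassifier, which marks unlabelled instances with -1 and only accepts predictions above a confidence threshold. The synthetic data, the decision-tree base learner, and the threshold below are illustrative assumptions, not taken from the cited works.

```python
# Wrapper-style self-training with scikit-learn's SelfTrainingClassifier:
# unlabelled examples carry the label -1, and only predictions whose
# confidence exceeds the threshold are added to the training set.
# The synthetic data and the decision-tree base learner are illustrative only.
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.semi_supervised import SelfTrainingClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
y = (X[:, 0] + X[:, 1] > 0).astype(int)

y_train = y.copy()
y_train[50:] = -1                              # treat most examples as unlabelled

model = SelfTrainingClassifier(DecisionTreeClassifier(max_depth=3), threshold=0.9)
model.fit(X, y_train)
print(model.predict(X[:5]))
```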