2018 · DOI: 10.3390/a11090139

An Auto-Adjustable Semi-Supervised Self-Training Algorithm

Abstract: Semi-supervised learning algorithms have become a topic of significant research as an alternative to traditional classification methods, which exhibit remarkable performance over labeled data but lack the ability to be applied to large amounts of unlabeled data. In this work, we propose a new semi-supervised learning algorithm that dynamically selects the most promising learner for a classification problem from a pool of classifiers based on a self-training philosophy. Our experimental results illustrate that t…
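The abstract describes a self-training loop in which the base learner is re-selected from a pool at each iteration. The Python sketch below illustrates one plausible reading of that idea, assuming scikit-learn-style estimators that implement predict_proba; the selection criterion (cross-validated accuracy on the current labeled set), the confidence threshold, and the name auto_self_train are illustrative assumptions, not the paper's exact mechanism.

```python
import numpy as np
from sklearn.base import clone
from sklearn.model_selection import cross_val_score

def select_best(classifiers, X, y):
    # Cross-validated accuracy on the current labeled set decides which
    # pool member is "most promising" (an assumed selection criterion).
    scores = [cross_val_score(clone(c), X, y, cv=3).mean() for c in classifiers]
    return clone(classifiers[int(np.argmax(scores))])

def auto_self_train(classifiers, X_lab, y_lab, X_unlab,
                    conf_threshold=0.9, max_iter=10):
    X_lab, y_lab = np.asarray(X_lab), np.asarray(y_lab)
    X_unlab = np.asarray(X_unlab)
    for _ in range(max_iter):
        if len(X_unlab) == 0:
            break
        model = select_best(classifiers, X_lab, y_lab).fit(X_lab, y_lab)
        proba = model.predict_proba(X_unlab)
        confident = proba.max(axis=1) >= conf_threshold
        if not confident.any():
            break
        # Move confidently pseudo-labeled points into the labeled set.
        pseudo = model.classes_[proba[confident].argmax(axis=1)]
        X_lab = np.vstack([X_lab, X_unlab[confident]])
        y_lab = np.concatenate([y_lab, pseudo])
        X_unlab = X_unlab[~confident]
    # Final model trained on the augmented labeled set.
    return select_best(classifiers, X_lab, y_lab).fit(X_lab, y_lab)
```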

Cited by 22 publications (18 citation statements)
Citation types: 0 supporting, 18 mentioning, 0 contrasting
Years cited: 2018–2023
References: 48 publications
“…Furthermore, the base learners utilized in all self-labeled algorithms are Sequential Minimal Optimization (SMO) [33], the C4.5 decision tree algorithm [34] and the kNN algorithm [35], as in [2,7–9], which probably constitute the most effective and popular machine learning algorithms for classification problems [36].…”
Section: Performance Evaluation of WVEnSL Against Ensemble Self-label… (mentioning)
confidence: 99%
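The statement above names three base learners. A minimal sketch of such a pool with scikit-learn stand-ins: SVC is trained with an SMO-style solver (libsvm), DecisionTreeClassifier implements CART and serves here only as a proxy for C4.5 (scikit-learn has no exact C4.5), and KNeighborsClassifier is plain kNN.

```python
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier
from sklearn.neighbors import KNeighborsClassifier

base_learners = [
    SVC(kernel="rbf", probability=True),  # probability=True enables predict_proba
    DecisionTreeClassifier(),             # CART, stand-in for C4.5
    KNeighborsClassifier(n_neighbors=5),  # kNN
]
```

Such a pool could be passed as the classifiers argument of the self-training sketch given earlier.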
“…The statistical comparison of several classification algorithms over multiple datasets is fundamental in machine learning and is usually performed by means of a statistical test [2,7–9]. Since we are interested in testing whether the hypothesis that all the algorithms perform equally well at a given significance level (based on their classification accuracy) can be rejected, and in highlighting significant differences between our proposed algorithm and the classical self-labeled algorithms, we utilized the non-parametric Friedman Aligned Ranking (FAR) test [37].…”
Section: Performance Evaluation of WVEnSL Against Ensemble Self-label… (mentioning)
confidence: 99%
“…In machine learning, the statistical comparison of several evaluation algorithms over multiple datasets is fundamental and is frequently performed by means of a statistical test [20,21,56]. Since we are interested in testing whether the hypothesis that all the algorithms perform equally well at a given significance level (based on their classification accuracy) can be rejected, and in highlighting significant differences between our proposed algorithm and the classical self-labeled algorithms, we used the non-parametric Friedman Aligned Ranking (FAR) test [57].…”
Section: First Phase of Experiments (mentioning)
confidence: 99%
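Both statements above rely on the Friedman Aligned Ranking (FAR) test. SciPy ships only the ordinary Friedman test (scipy.stats.friedmanchisquare), so the sketch below implements the aligned-ranks variant directly, following the statistic as usually stated in the nonparametric-comparison literature (e.g., García et al., 2010); treat it as an illustrative reconstruction, not a reference implementation.

```python
import numpy as np
from scipy.stats import chi2, rankdata

def friedman_aligned_ranks(acc):
    """FAR test on an (n_datasets, k_algorithms) accuracy matrix.
    Returns (T, p); under H0, T ~ chi-square with k - 1 df."""
    acc = np.asarray(acc, dtype=float)
    n, k = acc.shape
    # Align observations by removing each dataset's mean performance,
    # then rank all n*k aligned values together.
    aligned = acc - acc.mean(axis=1, keepdims=True)
    ranks = rankdata(aligned.ravel()).reshape(n, k)
    R_alg = ranks.sum(axis=0)  # rank totals per algorithm
    R_ds = ranks.sum(axis=1)   # rank totals per dataset
    num = (k - 1) * (R_alg @ R_alg - (k * n**2 / 4.0) * (k * n + 1) ** 2)
    den = k * n * (k * n + 1) * (2 * k * n + 1) / 6.0 - (R_ds @ R_ds) / k
    T = num / den
    return T, chi2.sf(T, k - 1)

# Example: accuracies of 3 algorithms on 4 datasets (made-up numbers).
acc = [[0.91, 0.88, 0.85],
       [0.78, 0.80, 0.74],
       [0.83, 0.84, 0.79],
       [0.90, 0.89, 0.86]]
T, p = friedman_aligned_ranks(acc)  # reject "all perform equally" if p < alpha
```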
“…Averaging (simple or weighted [13]) and voting (majority, unanimity, plurality, or even weighted votes) are popular and commonly used combination methods [14], depending on the problem that needs to be solved [10]. Moreover, approaches that place committees of base learners at the core of their learning process have also been demonstrated recently, with encouraging results [15].…”
Section: Introduction (mentioning)
confidence: 99%
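The combination rules listed in this statement (simple or weighted averaging of probabilities, and majority or weighted voting over hard predictions) condense into a few lines. A hedged sketch, assuming scikit-learn-style fitted classifiers that share the same sorted classes_ attribute; the helper name combine is hypothetical.

```python
import numpy as np

def combine(fitted, X, weights=None, rule="majority"):
    """Combine a committee's outputs. rule="average" does (weighted) soft
    averaging of predict_proba; rule="majority" does (weighted) hard voting."""
    X = np.asarray(X)
    w = np.ones(len(fitted)) if weights is None else np.asarray(weights, float)
    classes = fitted[0].classes_  # assumed identical and sorted for all members
    if rule == "average":
        probas = np.array([m.predict_proba(X) for m in fitted])
        avg = np.tensordot(w / w.sum(), probas, axes=1)  # weighted mean over members
        return classes[avg.argmax(axis=1)]
    # Weighted majority vote over hard predictions.
    votes = np.zeros((len(X), len(classes)))
    for wi, m in zip(w, fitted):
        votes[np.arange(len(X)), np.searchsorted(classes, m.predict(X))] += wi
    return classes[votes.argmax(axis=1)]
```

With uniform weights and rule="majority" this reduces to plain majority voting; supplying weights yields the weighted variants the statement mentions.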