2021
DOI: 10.1109/tsmc.2020.2982226
AUC-Based Extreme Learning Machines for Supervised and Semi-Supervised Imbalanced Classification

Abstract: Extreme learning machines (ELMs) have been theoretically and experimentally shown to achieve promising performance at a fast learning speed for supervised classification tasks. However, they do not perform well on imbalanced binary classification tasks and tend to become biased toward the majority class. Moreover, since large amounts of labeled training data are not always available in the real world, there is an urgent demand for an efficient semi-supervised version of ELM for imbalanced binary class…
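For readers unfamiliar with ELMs, the sketch below is a minimal, illustrative implementation of a plain ELM binary classifier (random hidden-layer weights, sigmoid activation, regularized least-squares output solve). It is not the paper's AUC-based or semi-supervised method; the class name, hyperparameters, and activation are assumptions made for illustration.

```python
import numpy as np

class SimpleELM:
    """Minimal extreme learning machine sketch (illustrative, not the paper's AUC-based variant)."""

    def __init__(self, n_hidden=100, reg=1e-2, random_state=0):
        self.n_hidden = n_hidden   # number of random hidden neurons
        self.reg = reg             # ridge regularization for the output-weight solve
        self.rng = np.random.default_rng(random_state)

    def _hidden(self, X):
        # Sigmoid activation on a random projection; W and b are fixed after fit().
        return 1.0 / (1.0 + np.exp(-(X @ self.W + self.b)))

    def fit(self, X, y):
        # y is expected in {-1, +1}.
        self.W = self.rng.normal(size=(X.shape[1], self.n_hidden))
        self.b = self.rng.normal(size=self.n_hidden)
        H = self._hidden(X)
        # Regularized least squares: beta = (H^T H + reg * I)^{-1} H^T y
        A = H.T @ H + self.reg * np.eye(self.n_hidden)
        self.beta = np.linalg.solve(A, H.T @ y)
        return self

    def decision_function(self, X):
        return self._hidden(X) @ self.beta

    def predict(self, X):
        return np.where(self.decision_function(X) >= 0.0, 1, -1)
```

Usage would look like `SimpleELM(n_hidden=200).fit(X_train, y_train).predict(X_test)` with labels coded as -1/+1; only the output weights are learned, which is what gives ELMs their fast training.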

Cited by 34 publications (5 citation statements). References 47 publications.
“…To measure the performance of the proposed model, several evaluation metrics were used such as precision ( Miao & Zhu, 2021 ), recall ( Tharwat, 2020 ), F-measure ( Soleymani, Granger & Fumera, 2020 ), accuracy ( Dinga et al, 2019 ), and Area Under Curve (AUC) ( Wang, Wong & Lu, 2020 ; Kabir & Ludwig, 2019b ). These were derived from the confusion matrix ( Xu, Zhang & Miao, 2020 ; Markoulidakis et al, 2021 ) to calculate different evaluations for the proposed model.…”
Section: Methods
Mentioning, confidence: 99%
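As a rough illustration of how those metrics follow from the confusion matrix, the sketch below computes precision, recall, F-measure, accuracy, and AUC for a small synthetic binary example with scikit-learn; the labels and scores are made up for demonstration and are not from the cited studies.

```python
from sklearn.metrics import confusion_matrix, roc_auc_score

# Illustrative labels and scores for a binary, imbalanced problem (not real data).
y_true  = [0, 0, 0, 0, 0, 0, 0, 0, 1, 1]
y_score = [0.1, 0.2, 0.15, 0.3, 0.4, 0.05, 0.6, 0.2, 0.8, 0.55]
y_pred  = [1 if s >= 0.5 else 0 for s in y_score]

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()

# Metrics derived directly from the confusion-matrix counts.
precision = tp / (tp + fp)
recall    = tp / (tp + fn)                  # a.k.a. TPR / sensitivity
f1        = 2 * precision * recall / (precision + recall)
accuracy  = (tp + tn) / (tp + tn + fp + fn)

# AUC is computed from the continuous scores, not the thresholded predictions.
auc = roc_auc_score(y_true, y_score)

print(precision, recall, f1, accuracy, auc)
```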
“…According to [54], a high AUC value results in a model robust against class imbalance. Furthermore, the F1 score can work well when data are unbalanced since it represents the harmonic mean of precision and TPR.…”
Section: Metrics Used To Evaluate Classification Performance
Mentioning, confidence: 99%
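To make this concrete, a small synthetic check (an assumed 95/5 class imbalance and a degenerate majority-class predictor) shows why accuracy alone can be misleading on imbalanced data while F1 and AUC expose the problem; the data are illustrative, not from [54].

```python
import numpy as np
from sklearn.metrics import accuracy_score, f1_score, roc_auc_score

# Illustrative ground truth with 95% negatives and 5% positives.
y_true = np.array([0] * 95 + [1] * 5)

# A degenerate "always predict the majority class" model.
y_pred_majority  = np.zeros_like(y_true)
y_score_majority = np.zeros_like(y_true, dtype=float)

print(accuracy_score(y_true, y_pred_majority))   # 0.95 -- looks strong but is misleading
print(f1_score(y_true, y_pred_majority))         # 0.0  -- harmonic mean of precision and TPR
print(roc_auc_score(y_true, y_score_majority))   # 0.5  -- no better than chance
```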
“…Taking into consideration the comparative analysis from Table 2 for supervised machine learning classifiers, four classifiers, viz. Random Forest (RF), Support Vector Machine (SVM), Naïve Bayes (NB), and Logistic Regression (LR) [13][14][15][16][17], were used for supervised classification with machine learning [2,18,19] on the above datasets. A train/test scheme with stratified cross-validation (k = 10) was used for classifier experimentation.…”
Section: Datasets Used
Mentioning, confidence: 99%
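The cited setup can be approximated with scikit-learn as sketched below; the synthetic dataset, scoring metric, and hyperparameters are assumptions for illustration, not the cited authors' actual configuration.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC
from sklearn.naive_bayes import GaussianNB
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import StratifiedKFold, cross_val_score

# Synthetic imbalanced dataset as a stand-in for the datasets referenced above.
X, y = make_classification(n_samples=1000, weights=[0.9, 0.1], random_state=42)

classifiers = {
    "RF":  RandomForestClassifier(random_state=42),
    "SVM": SVC(probability=True, random_state=42),
    "NB":  GaussianNB(),
    "LR":  LogisticRegression(max_iter=1000),
}

# Stratified 10-fold cross-validation, scored here with ROC AUC.
cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=42)
for name, clf in classifiers.items():
    scores = cross_val_score(clf, X, y, cv=cv, scoring="roc_auc")
    print(f"{name}: mean AUC = {scores.mean():.3f} (+/- {scores.std():.3f})")
```

Stratification keeps the class ratio roughly constant across folds, which matters when the minority class is small enough that a plain random split could leave some folds with few or no positives.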