2017
DOI: 10.1016/j.csda.2017.01.005
RHSBoost: Improving classification performance in imbalance data

Cited by 98 publications (41 citation statements)
References 11 publications
“…Receiver operating characteristic (ROC) curves, the G-mean, and the F-measure (FM) are preferred because these measures are unlikely to be affected by imbalanced class distributions [11]. In a binary classification problem with good and bad class labels, the classifier's result is considered successful only if both the false positive rate and the false negative rate are small [15]. Sensitivity measures the accuracy on positive samples, while specificity measures the accuracy on negative samples.…”
Section: Performance Measurement (mentioning)
confidence: 99%
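The measures quoted above follow directly from the binary confusion matrix. As a minimal illustration (not taken from the cited paper), the sketch below computes sensitivity, specificity, G-mean, and F-measure in Python; the function name and the 0/1 label convention are assumptions.

```python
import numpy as np

def imbalance_metrics(y_true, y_pred):
    """Sensitivity, specificity, G-mean and F-measure for binary labels,
    assuming 1 = positive (minority) class and 0 = negative (majority) class."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)

    tp = np.sum((y_true == 1) & (y_pred == 1))
    fn = np.sum((y_true == 1) & (y_pred == 0))
    tn = np.sum((y_true == 0) & (y_pred == 0))
    fp = np.sum((y_true == 0) & (y_pred == 1))

    sensitivity = tp / (tp + fn)        # accuracy on positive samples
    specificity = tn / (tn + fp)        # accuracy on negative samples
    precision = tp / (tp + fp)

    g_mean = np.sqrt(sensitivity * specificity)
    f_measure = 2 * precision * sensitivity / (precision + sensitivity)
    return sensitivity, specificity, g_mean, f_measure
```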
“…Wang et al [8] used a K-labelsets ensemble method based on mutual information and joint entropy to deal with imbalanced data. Gong et al [9] presented an ensemble method using random undersampling and ROSE sampling to solve the imbalanced classification problem. So when facing a data imbalance problem, it is a good choice to first determine whether the imbalance lies in the data distribution or in the data labels, and then apply the corresponding method.…”
Section: Data Imbalance (mentioning)
confidence: 99%
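The quoted statement describes RHSBoost only at a high level (random undersampling plus ROSE sampling under a boosting scheme), so the sketch below should be read as a rough illustration of undersampling inside a boosting loop, closer in spirit to RUSBoost than to the exact RHSBoost procedure; all function and parameter names are hypothetical.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def undersample_boost(X, y, n_rounds=10, random_state=0):
    """Illustrative boosting loop: each weak learner is trained on a
    randomly undersampled (class-balanced) subset. Labels are 0/1."""
    X, y = np.asarray(X), np.asarray(y)
    rng = np.random.default_rng(random_state)
    n = len(y)
    w = np.full(n, 1.0 / n)                      # boosting weights
    learners, alphas = [], []

    for _ in range(n_rounds):
        # Balance the classes by undersampling the majority class.
        pos, neg = np.where(y == 1)[0], np.where(y == 0)[0]
        minority, majority = (pos, neg) if len(pos) < len(neg) else (neg, pos)
        keep = rng.choice(majority, size=len(minority), replace=False)
        idx = np.concatenate([minority, keep])

        clf = DecisionTreeClassifier(max_depth=1)
        clf.fit(X[idx], y[idx], sample_weight=w[idx])

        pred = clf.predict(X)
        err = np.sum(w * (pred != y)) / np.sum(w)
        if err <= 0 or err >= 0.5:               # stop on degenerate rounds
            break
        alpha = 0.5 * np.log((1 - err) / err)

        # Reweight: misclassified samples gain weight for the next round.
        w *= np.exp(alpha * (2 * (pred != y) - 1))
        w /= w.sum()
        learners.append(clf)
        alphas.append(alpha)
    return learners, alphas
```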
“…The analysis of SMOTEBagging with logistic regression using credit scoring data revealed its higher degree of accuracy compared to a simple logistic algorithm (Hanifah et al, 2015). A new ensemble classification method, RHSBoost, which uses random undersampling and ROSE sampling under a boosting scheme, was proposed to address the imbalanced classification problem (Gong & Kim, 2017). A study proposed variants of SMOTEBoost for the imbalanced regression task and evaluated their performance on 30 datasets (Moniz et al, 2018).…”
Section: Related Work (mentioning)
confidence: 99%
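For the SMOTEBagging idea mentioned above, a rough approximation (not Hanifah et al.'s exact procedure) is to apply SMOTE oversampling inside each bag before fitting a logistic regression. The sketch assumes scikit-learn and imbalanced-learn are installed and uses a synthetic dataset in place of credit-scoring data.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import BaggingClassifier
from sklearn.linear_model import LogisticRegression
from imblearn.over_sampling import SMOTE
from imblearn.pipeline import make_pipeline

# Imbalanced toy data (about 5% positives), standing in for credit-scoring data.
X, y = make_classification(n_samples=2000, weights=[0.95, 0.05], random_state=0)

# Each bag resamples the training data and applies SMOTE before fitting a
# logistic regression, which approximates the SMOTEBagging idea.
base = make_pipeline(SMOTE(random_state=0), LogisticRegression(max_iter=1000))
ensemble = BaggingClassifier(base, n_estimators=10, random_state=0)
ensemble.fit(X, y)

# Scores for the minority class can then be used with the measures above.
minority_scores = ensemble.predict_proba(X)[:, 1]
```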