2021
DOI: 10.1109/access.2020.3047741
An Improved MAHAKIL Oversampling Method for Imbalanced Dataset Classification

Cited by 33 publications (41 citation statements). References 23 publications.
“…In over-sampling methods, new samples are created based on samples from the minority class to reach a more balanced class distribution of samples while strengthening class boundaries [40,41]. However, over-sampling may lead to overfitting because it duplicates or synthesises minority samples [42]. As the number of samples increases, the training time also increases [43].…”
Section: Data-level Methods (mentioning)
confidence: 99%
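To make the trade-off in that statement concrete, here is a minimal sketch of duplication-based over-sampling, assuming a NumPy feature matrix with binary labels (function and variable names are illustrative, not from the cited papers). The overfitting risk arises because the classifier sees exact copies of the same minority points, and the enlarged training set also explains the longer training time noted in [43].

import numpy as np

def random_oversample(X, y, minority_label=1, seed=0):
    # Duplicate randomly chosen minority samples until both classes
    # have the same number of samples.
    rng = np.random.default_rng(seed)
    minority = np.flatnonzero(y == minority_label)
    majority = np.flatnonzero(y != minority_label)
    n_needed = len(majority) - len(minority)
    if n_needed <= 0:
        return X, y  # already balanced
    picks = rng.choice(minority, size=n_needed, replace=True)
    return np.vstack([X, X[picks]]), np.concatenate([y, y[picks]])

# Toy usage: 8 majority samples vs. 2 minority samples.
X = np.arange(20, dtype=float).reshape(10, 2)
y = np.array([0] * 8 + [1] * 2)
X_bal, y_bal = random_oversample(X, y)
print(np.bincount(y_bal))  # [8 8]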
“…Solutions proposed in previous literature can be generally divided into three categories (Krawczyk, 2016): (1) data-level methods that employ undersampling or over-sampling techniques to balance the class distributions (Barua et al, 2014; Smith et al, 2014; Sobhani et al, 2014; Zheng et al, 2015).…”
Section: Introduction (mentioning)
confidence: 99%
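As a companion to the over-sampling sketch above, a minimal illustration of the under-sampling side of the data-level category, under the same assumptions (binary labels, NumPy arrays, illustrative names):

import numpy as np

def random_undersample(X, y, majority_label=0, seed=0):
    # Keep a random subset of majority samples equal in size to the
    # minority class; the rest of the majority class is discarded.
    rng = np.random.default_rng(seed)
    majority = np.flatnonzero(y == majority_label)
    minority = np.flatnonzero(y != majority_label)
    keep = rng.choice(majority, size=len(minority), replace=False)
    idx = np.concatenate([keep, minority])
    return X[idx], y[idx]

X = np.arange(20, dtype=float).reshape(10, 2)
y = np.array([0] * 8 + [1] * 2)
X_bal, y_bal = random_undersample(X, y)
print(np.bincount(y_bal))  # [2 2]

The trade-off is the mirror image of over-sampling: no duplicated points, but potentially informative majority samples are thrown away.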
“…Experiments on synthetic data showed that FRIPS-SMOTE-FRBPS outperforms state-of-the-art methods such as SMOTE and its various modifications. Zheng et al [38] proposed a new oversampling approach, SNOCC, that can compensate for the defects of SMOTE. In SNOCC, the authors increased the number of seed samples so that new samples are no longer confined to the line segment between two seed samples, as they are in SMOTE.…”
Section: Related Work (mentioning)
confidence: 99%
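The two-seed line-segment mechanism that the statement says SNOCC relaxes is easiest to see in a sketch of SMOTE-style interpolation. This is not the SNOCC algorithm itself (the statement does not detail it); it only shows the SMOTE step being criticised, with illustrative names and a brute-force nearest-neighbour search:

import numpy as np

def smote_like_samples(X_min, k=2, n_new=3, seed=0):
    # Each synthetic point lies on the line segment between a random
    # minority seed and one of its k nearest minority neighbours.
    rng = np.random.default_rng(seed)
    out = []
    for _ in range(n_new):
        i = rng.integers(len(X_min))
        d = np.linalg.norm(X_min - X_min[i], axis=1)
        neighbours = np.argsort(d)[1:k + 1]  # skip the seed itself
        j = rng.choice(neighbours)
        gap = rng.random()  # position along the segment, in (0, 1)
        out.append(X_min[i] + gap * (X_min[j] - X_min[i]))
    return np.array(out)

X_min = np.array([[0.0, 0.0], [1.0, 0.2], [0.8, 1.0], [0.1, 0.9]])
print(smote_like_samples(X_min))  # 3 points, each between two seeds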
“…We use a Laplace estimator to calculate the prior probability. The Laplace estimator shows excellent performance in the Naive Bayes classification algorithm [38,51]. One extra benefit of using the Laplace estimator is that zero probability can be avoided.…”
Section: Case 2: Using F-measure Performance Metric (mentioning)
confidence: 99%
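The zero-probability point becomes concrete in a short sketch of Laplace-smoothed class priors, assuming integer class labels (parameter names are illustrative; the cited paper's exact formulation may differ):

import numpy as np

def laplace_prior(y, n_classes, alpha=1.0):
    # P(c) = (count(c) + alpha) / (N + alpha * n_classes): even a class
    # absent from the training labels gets a nonzero prior.
    counts = np.bincount(y, minlength=n_classes).astype(float)
    return (counts + alpha) / (len(y) + alpha * n_classes)

y = np.array([0, 0, 0, 1, 1])          # class 2 never appears
print(laplace_prior(y, n_classes=3))   # [0.5 0.375 0.125], all nonzero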