2021
DOI: 10.1016/j.knosys.2021.107056

SMOTE-NaN-DE: Addressing the noisy and borderline examples problem in imbalanced classification by natural neighbors and differential evolution

Cited by 53 publications (27 citation statements)
References 35 publications
“…Filtering-based oversampling techniques design noise filters, intending to detect and filter out suspicious noise. SMOTE-ENN [28], SMOTE-WENN [29], SMOTE-IPF [18], FRIPS-SMOTE [19] and SMOTE-NaN-DE [10] are competitive instances of the filtering-based idea. The edited nearest neighbor is employed in SMOTE-ENN and SMOTE-WENN to find mislabeled samples regarded as suspicious noise, while SMOTE is executed to create synthetic samples.…”
Section: Related Work
confidence: 99%
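As a concrete reference for the filtering-based idea described in this excerpt, the sketch below runs SMOTE oversampling followed by edited-nearest-neighbour cleaning using the imbalanced-learn package. The dataset shape and class weights are illustrative assumptions, not values taken from the cited papers, and this is the generic SMOTE-ENN combination rather than the SMOTE-NaN-DE method itself.

```python
from collections import Counter

from imblearn.combine import SMOTEENN          # SMOTE oversampling + ENN cleaning
from sklearn.datasets import make_classification

# Imbalanced toy data: roughly 90% majority / 10% minority (illustrative values).
X, y = make_classification(n_samples=1000, n_features=10,
                           weights=[0.9, 0.1], random_state=42)

# SMOTE creates synthetic minority samples; the edited nearest neighbour rule
# then removes samples whose neighbourhood disagrees with their label, i.e.
# the "suspicious noise" referred to in the excerpt above.
X_res, y_res = SMOTEENN(random_state=42).fit_resample(X, y)

print("before:", Counter(y))
print("after :", Counter(y_res))
```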
“…An improved differential evolution is proposed. Compared with related work [10,30,31], the proposed improved differential evolution is parameter-free and converges faster.…”
Section: Introduction
confidence: 96%
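The excerpt contrasts the cited parameter-free variant with standard differential evolution, whose control parameters F and CR must be set by hand. The sketch below is a plain DE/rand/1/bin loop with fixed F and CR, offered as a baseline reference only; it is not the improved algorithm from the cited work, and the fitness function, bounds, and parameter values are illustrative assumptions.

```python
import numpy as np

def differential_evolution(fitness, bounds, pop_size=20, F=0.5, CR=0.9,
                           generations=100, seed=0):
    """Plain DE/rand/1/bin with fixed F and CR; the improved variant cited
    above is described as parameter-free, whose adaptation rule is not
    reproduced here."""
    rng = np.random.default_rng(seed)
    dim = len(bounds)
    lo, hi = np.array(bounds, dtype=float).T
    pop = rng.uniform(lo, hi, size=(pop_size, dim))
    fit = np.array([fitness(x) for x in pop])
    for _ in range(generations):
        for i in range(pop_size):
            # pick three distinct individuals other than i for mutation
            a, b, c = pop[rng.choice([j for j in range(pop_size) if j != i],
                                     size=3, replace=False)]
            mutant = np.clip(a + F * (b - c), lo, hi)
            cross = rng.random(dim) < CR
            cross[rng.integers(dim)] = True   # ensure at least one gene crosses
            trial = np.where(cross, mutant, pop[i])
            f_trial = fitness(trial)
            if f_trial <= fit[i]:             # greedy selection (minimisation)
                pop[i], fit[i] = trial, f_trial
    return pop[fit.argmin()], fit.min()

# usage: minimise the sphere function in 5 dimensions
best_x, best_f = differential_evolution(lambda x: float(np.sum(x ** 2)),
                                        bounds=[(-5, 5)] * 5)
print(best_f)
```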
“…Recently, interest in using EAs to address machine learning problems has been growing rapidly [19]-[29]. For imbalanced learning, EAs have been used for data sampling [30], [31] and cost-sensitive learning [32]. Although recent studies address the problem of determining the optimal misclassification costs [32], [33], they have paid little attention to the hyper-parameters of the learning algorithm, or to exploiting the hierarchical nature of parameter and hyper-parameter learning to guide the search.…”
Section: Introduction
confidence: 99%
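To make the cost-sensitive direction mentioned in this excerpt concrete, the hypothetical sketch below treats the minority-class misclassification cost as the quantity to be tuned, expressed through scikit-learn's class_weight; an evolutionary search such as the DE loop sketched earlier could replace the small hand-picked grid used here. The dataset, classifier, and cost values are illustrative assumptions, not taken from the cited studies.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

# Imbalanced toy data (illustrative): 90% majority / 10% minority.
X, y = make_classification(n_samples=1000, weights=[0.9, 0.1], random_state=0)

# The minority misclassification cost acts as the tunable parameter;
# here it is swept over a tiny grid instead of being optimised by an EA.
for minority_cost in (1, 5, 10, 20):
    clf = DecisionTreeClassifier(class_weight={0: 1, 1: minority_cost},
                                 random_state=0)
    score = cross_val_score(clf, X, y, scoring="f1", cv=5).mean()
    print(f"minority cost {minority_cost:>2}: mean F1 = {score:.3f}")
```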