2018 International Joint Conference on Neural Networks (IJCNN)
DOI: 10.1109/ijcnn.2018.8489504

Deep MLPs for Imbalanced Classification

Cited by 12 publications (10 citation statements)
References 18 publications

“…CLEMS (Huang and Lin, 2017) introduces a cost-sensitive label embedding technique that takes the cost function of interest into account. CS-DMLP (Díaz-Vico et al., 2018) is a deep multi-layer perceptron model utilizing cost-sensitive learning to regularize the posterior probability distribution predicted for a given sample. This type of method normally requires domain knowledge to define the actual cost value, which is often hard in real-world scenarios (Krawczyk, 2016).…”
Section: Algorithm-level Methods (mentioning)
confidence: 99%
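
The exact CS-DMLP formulation is given in Díaz-Vico et al. (2018); purely as a sketch of the general cost-sensitive idea the statement describes, the snippet below folds hypothetical per-class misclassification costs into the cross-entropy loss of a plain deep MLP. The cost values, layer sizes, and learning rate are illustrative assumptions, not the paper's settings.

```python
import torch
import torch.nn as nn

# Hypothetical per-class misclassification costs (normally set from
# domain knowledge); the minority class (index 1) is more expensive.
class_costs = torch.tensor([1.0, 10.0])

# A plain deep MLP; the layer sizes are placeholders, not CS-DMLP's.
model = nn.Sequential(
    nn.Linear(20, 64), nn.ReLU(),
    nn.Linear(64, 64), nn.ReLU(),
    nn.Linear(64, 2),
)

# Weighted cross-entropy: errors on the costly class contribute more
# to the loss, biasing the learned posterior toward the minority class.
loss_fn = nn.CrossEntropyLoss(weight=class_costs)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

def train_step(x, y):
    """One gradient step on a batch (x: float features, y: int labels)."""
    optimizer.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    optimizer.step()
    return loss.item()
```

The only change relative to a standard MLP is the `weight` argument of the loss, which is what makes defining realistic cost values the hard part the citing authors point out.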
“…The most popular branch is the cost-sensitive algorithms, which assign a higher cost to misclassifying the minority class instances (Díaz-Vico et al., 2018). (3) Ensemble-based methods that combine advantages of data-level and algorithm-level methods by merging data-level solutions with classifier ensembles, resulting in robust and efficient learners (Galar et al., 2012; Wang et al., 2015).…”
Section: Introduction (mentioning)
confidence: 99%
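
To make the ensemble-based category concrete, here is a minimal UnderBagging-style sketch in the spirit of Galar et al. (2012): each base learner is trained on a class-balanced random undersample, and predictions are combined by majority vote. The base learner, ensemble size, and function names are arbitrary illustrative choices, not taken from the cited works.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def under_bagging_fit(X, y, n_estimators=10, seed=0):
    """Train each tree on a class-balanced random undersample of (X, y)."""
    rng = np.random.default_rng(seed)
    classes, counts = np.unique(y, return_counts=True)
    n_min = counts.min()
    ensemble = []
    for _ in range(n_estimators):
        # Draw n_min examples per class (without replacement),
        # so every base learner sees a balanced training set.
        idx = np.concatenate([
            rng.choice(np.flatnonzero(y == c), n_min, replace=False)
            for c in classes
        ])
        tree = DecisionTreeClassifier(random_state=int(rng.integers(1 << 31)))
        ensemble.append(tree.fit(X[idx], y[idx]))
    return ensemble

def under_bagging_predict(ensemble, X):
    """Combine the base learners by unweighted majority vote."""
    votes = np.stack([tree.predict(X) for tree in ensemble]).astype(int)
    return np.apply_along_axis(lambda col: np.bincount(col).argmax(), 0, votes)
```

The data-level step (undersampling) lives inside the ensemble loop, which is exactly the merging of the two method families the quoted passage describes.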
“…SCUT is a hybrid sampling method proposed by Agarwal et al. [26] for balancing the examples in multiclass data sets. The minority classes are oversampled generating synthetic examples with SMOTE while [a survey table of multiclass imbalanced-classification methods is interleaved here; its rows list: SCUT, SMOTE and cluster-based undersampling [26]; MDO, Mahalanobis-based oversampling [27,28,29]; SMOM, synthetic oversampling for multiclass [30]; Hellinger distance decision trees [31]; dynamic sampling for multilayer perceptrons [32]; deep MLPs for imbalance [33]; AdaC2.M1, cost-sensitive boosting [34]; cost-sensitive OVO ensemble [35]; cost-sensitive neural networks with binarization [36]; OVA with hybrid sampling [9]; OVO fuzzy rough set [37]; binarization with over/undersampling [38]; instance weighting (cost-sensitive) [38]; UnderBagging [39,17]; SMOTEBagging [40,17]; RUSBoost [41,17]; SMOTEBoost [42,17]; SMOTE+AdaBoost [17]; EasyEnsemble [43,17,44]; binarization with boosting and oversampling [45]; diversified ECOC [46]; RAMOBoost [47,32]; AdaBoost.NC [48,17,38,44]; probability threshold Bagging [49]; dynamic ensemble selection [44]; multiclass Roughly Balanced Bagging [50,49]] the m...…”
Section: Multiclass Imbalanced Classification (mentioning)
confidence: 99%
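
The SMOTE step that SCUT builds on generates each synthetic minority example by interpolating between a minority point and one of its k nearest minority-class neighbors. A minimal sketch of that interpolation (k=5 follows the original SMOTE paper; the function name and sampling loop are illustrative):

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def smote_oversample(X_min, n_synthetic, k=5, seed=0):
    """Create n_synthetic points by interpolating minority examples
    with randomly chosen minority-class nearest neighbors."""
    rng = np.random.default_rng(seed)
    # k + 1 neighbors because each point's nearest neighbor is itself.
    nn = NearestNeighbors(n_neighbors=k + 1).fit(X_min)
    _, neigh = nn.kneighbors(X_min)
    samples = []
    for _ in range(n_synthetic):
        i = rng.integers(len(X_min))          # a random minority point
        j = neigh[i][rng.integers(1, k + 1)]  # one of its k true neighbors
        gap = rng.random()                    # interpolation factor in [0, 1)
        samples.append(X_min[i] + gap * (X_min[j] - X_min[i]))
    return np.array(samples)
```

SCUT then pairs this oversampling with cluster-based undersampling of the majority classes, per the quoted description.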
“…Sáez et al. [22] found that oversampling benefits from distinguishing between four example types. Algorithm level: publications reporting methods adapted to multiclass imbalance are [31], [32] and [33].…”
Section: Multiclass Imbalanced Classification (mentioning)
confidence: 99%
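
The four example types referenced here are commonly taken to be safe, borderline, rare, and outlier, assigned from the class makeup of each minority example's 5 nearest neighbors. The sketch below uses the usual thresholds of that taxonomy; they are an assumption and may differ in detail from the exact definition in Sáez et al. [22].

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def minority_example_types(X, y, minority_class, k=5):
    """Label each minority example safe/borderline/rare/outlier by the
    number of same-class points among its k nearest neighbors."""
    nn = NearestNeighbors(n_neighbors=k + 1).fit(X)  # +1: self is returned
    _, neigh = nn.kneighbors(X[y == minority_class])
    types = []
    for row in neigh:
        same = int(np.sum(y[row[1:]] == minority_class))  # skip self at 0
        types.append("safe" if same >= 4 else
                     "borderline" if same >= 2 else
                     "rare" if same == 1 else "outlier")
    return types
```

Oversampling can then be targeted, e.g. generating more synthetic points around borderline and rare examples than around safe ones.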