2018
DOI: 10.1142/s0218213018500124

Optimized Classification Predictions with a New Index Combining Machine Learning Algorithms

Abstract: Voting is a commonly used ensemble method that aims to optimize classification predictions by combining results from individual base classifiers. However, the selection of appropriate classifiers to participate in a voting algorithm is currently an open issue. In this study we developed a novel Dissimilarity-Performance (DP) index which incorporates two important criteria for the selection of base classifiers to participate in voting: their differential response in classification (dissimilarity) when combined in tr…
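The truncated abstract describes a DP index that weighs pairwise dissimilarity against individual performance when choosing base classifiers for voting. The paper's exact formula is not visible here, so the sketch below is a hypothetical illustration only: it scores each classifier pair by a weighted average of their disagreement rate and their mean accuracy. The weight `alpha`, the function names, and the toy predictions are all invented for the example.

```python
from itertools import combinations

def disagreement(preds_a, preds_b):
    """Fraction of samples on which two classifiers disagree (pairwise dissimilarity)."""
    return sum(a != b for a, b in zip(preds_a, preds_b)) / len(preds_a)

def accuracy(preds, truth):
    """Fraction of samples a classifier labels correctly."""
    return sum(p == t for p, t in zip(preds, truth)) / len(truth)

def dp_score(preds_a, preds_b, truth, alpha=0.5):
    # Hypothetical combination -- the paper's actual DP formula is not
    # shown in the truncated abstract. Here we simply blend the pair's
    # disagreement rate with the mean accuracy of its two members.
    perf = (accuracy(preds_a, truth) + accuracy(preds_b, truth)) / 2
    return alpha * disagreement(preds_a, preds_b) + (1 - alpha) * perf

# Toy ground truth and base-classifier predictions (invented data).
truth = [1, 0, 1, 1, 0, 1]
preds = {
    "nb":  [1, 0, 1, 0, 0, 1],
    "svm": [1, 1, 1, 1, 0, 1],
    "rf":  [0, 1, 0, 1, 0, 1],
}

# Rank classifier pairs: high dissimilarity plus high accuracy first.
ranked = sorted(combinations(preds, 2),
                key=lambda pair: dp_score(preds[pair[0]], preds[pair[1]], truth),
                reverse=True)
print(ranked[0])  # -> ('nb', 'rf'): the most dissimilar pair wins here
```

With these toy predictions the `nb`/`rf` pair ranks first because its members disagree on four of six samples while keeping reasonable accuracy, which is the intuition behind preferring diverse yet competent voters.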

Cited by 10 publications (10 citation statements)
References 84 publications (99 reference statements)
“…Furxhi et al [72] proposed a composite score based on a Copeland index to rank classifiers according to their performance on diverse datasets, validation stages, and performance metrics. Tamvakis et al [133] proposed a dissimilarity performance index based on their voting performance to recommend the optimal ensemble combination. A variety of different datasets were used in this scenario to evaluate the relationship between voting results and dissimilarity measurements.…”
Section: Ranking Of Classifiers
confidence: 99%
“…This is because, in most cases, an ensemble learning model is more accurate than any single model used separately. The effectiveness of ensemble learning models has been proven in different applications (Tamvakis et al, 2018).…”
Section: Bagging Ensemble
confidence: 99%
“…The most popular classifiers in predicting the toxicity of NPs range from artificial Neural Networks (NN) [1,10], Bayesian Networks (BN) [12,[25][26][27], Quantitative Structure Activity Relationships (QSARs aka nano-QSARs) [18,28,29], Linear Regression (LR), Random Forest (RF) and Support Vector Machines (SVM) [14,30,31]. Recently, integrated approaches (ensemble classifiers) have been used to merge results from individual (base) classifiers in order to optimize the predictions [32][33][34][35][36][37]. Voting is a comprehensive ensemble learning method that collects votes from multiple base classifiers and predicts the outcome via a voting mechanism to obtain better predictive performance [38,39].…”
Section: RF
confidence: 99%
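The hard-voting mechanism described in the excerpt above — each base classifier casts one vote per sample and the most common label wins — can be sketched in a few lines. The base predictions below are invented toy data; tie-breaking by first-seen label is a behaviour of `collections.Counter`, not something specified by the paper.

```python
from collections import Counter

def majority_vote(predictions):
    """Hard voting: each base classifier casts one vote per sample;
    the most common label wins (ties broken by first-seen order)."""
    return [Counter(votes).most_common(1)[0][0]
            for votes in zip(*predictions)]

# Toy predictions from three hypothetical base classifiers.
base_preds = [
    [1, 0, 1, 1],   # e.g. a Naive Bayes classifier
    [1, 1, 0, 1],   # e.g. an SVM
    [0, 0, 1, 1],   # e.g. a Random Forest
]
print(majority_vote(base_preds))  # -> [1, 0, 1, 1]
```

Per column, the votes are (1,1,0), (0,1,0), (1,0,1), (1,1,1), so the majority label in each column forms the ensemble prediction.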
“…Classifiers are among the most common ML tools, exploiting experimental data such as physicochemical properties, quantum-mechanical attributes and toxicological outputs for nanotoxicity prediction [13,28,41]. Despite the wide variety and selection of classifiers and modelling approaches, no optimal classifier has been identified so far [32,42]. Instead, the predictability of a classifier depends on the dataset characteristics (missing values, training size, input variables) or the methods used to assess classifier performance [32].…”
Section: RF
confidence: 99%