“…More specifically, we tested the following methods: classification tree (CT; CART algorithm [4,5,14,25,45]), k-nearest neighbor classifier (k-NN) [10,15,45], linear discriminant analysis (LDA) [2,8], logistic regression (LR) [12,22], least-squares support vector machine (LS-SVM) [38,39], Mahalanobis discriminant analysis (MDA) [6], naïve Bayes (NB) variants [33,34,45], quadratic discriminant analysis (QDA) [8,21], and random forests (RF) [7,23]. These algorithms were selected because they have shown strong performance in many applications [20,40,42,45] and because they extend our earlier study [30] significantly. The naïve Bayes method was tested both with and without kernel density estimation (KDE) [18].…”
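The distinction between the two naïve Bayes variants may be worth illustrating: the parametric version models each class-conditional feature density as a Gaussian, while the KDE version estimates that density nonparametrically from the training samples. The following is a minimal stdlib-only sketch of both variants (not the authors' implementation; the class names, the Gaussian kernel choice, and the fixed bandwidth are illustrative assumptions):

```python
import math

def gaussian_pdf(x, mean, std):
    # Univariate Gaussian density; guard against zero variance.
    std = max(std, 1e-9)
    return math.exp(-0.5 * ((x - mean) / std) ** 2) / (std * math.sqrt(2 * math.pi))

def kde_pdf(x, samples, bandwidth):
    # Per-feature kernel density estimate with a Gaussian kernel
    # (bandwidth is assumed fixed here; in practice it would be tuned).
    return sum(gaussian_pdf(x, s, bandwidth) for s in samples) / len(samples)

class NaiveBayes:
    """Naïve Bayes with either Gaussian or KDE class-conditional densities."""

    def __init__(self, use_kde=False, bandwidth=0.5):
        self.use_kde = use_kde
        self.bandwidth = bandwidth

    def fit(self, X, y):
        self.classes = sorted(set(y))
        self.by_class = {c: [x for x, yi in zip(X, y) if yi == c]
                         for c in self.classes}
        self.priors = {c: len(self.by_class[c]) / len(y) for c in self.classes}
        # Parametric variant: store per-feature (mean, std) for each class.
        self.params = {}
        for c, rows in self.by_class.items():
            stats = []
            for col in zip(*rows):
                mean = sum(col) / len(col)
                std = math.sqrt(sum((v - mean) ** 2 for v in col) / len(col))
                stats.append((mean, std))
            self.params[c] = stats

    def _log_posterior(self, x, c):
        # Log prior plus sum of per-feature log-likelihoods
        # (features assumed conditionally independent given the class).
        ll = math.log(self.priors[c])
        for j, xj in enumerate(x):
            if self.use_kde:
                p = kde_pdf(xj, [row[j] for row in self.by_class[c]],
                            self.bandwidth)
            else:
                mean, std = self.params[c][j]
                p = gaussian_pdf(xj, mean, std)
            ll += math.log(max(p, 1e-12))  # avoid log(0)
        return ll

    def predict(self, x):
        return max(self.classes, key=lambda c: self._log_posterior(x, c))
```

On a toy two-class dataset such as `X = [(1.0, 2.0), (1.2, 1.9), (0.9, 2.1), (3.0, 0.5), (3.2, 0.4), (2.9, 0.6)]` with `y = [0, 0, 0, 1, 1, 1]`, both `NaiveBayes()` and `NaiveBayes(use_kde=True)` assign a point near (1.1, 2.0) to class 0. The practical difference appears on multimodal or skewed feature distributions, where the single-Gaussian assumption misfits the density and the KDE variant does not.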