2017
DOI: 10.1016/j.asoc.2017.08.023

Handling binary classification problems with a priority class by using Support Vector Machines

Abstract: © 2017 Elsevier B.V. A post-processing technique for Support Vector Machine (SVM) algorithms for binary classification problems is introduced in order to obtain adequate accuracy on a priority class (labelled as the positive class). That is, the true positive rate (also called recall or sensitivity) is prioritized over the accuracy of the overall classifier. Hence, false negative (Type II) errors receive greater consideration than false positive (Type I) errors during the construction of the model. This post-processin…
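The abstract is truncated above, so the paper's exact post-processing rule is not reproduced here. The snippet below is only a minimal sketch of the generic idea it describes: after training an SVM, shift the decision threshold so that a target recall on the positive (priority) class is met, accepting a possible loss in overall accuracy. The dataset, the target recall value, and the threshold-selection rule are all assumptions made for illustration, not the authors' method.

```python
# Sketch: post-hoc threshold shifting to prioritize recall on the positive class.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Assumed imbalanced toy data; class 1 plays the role of the priority class.
X, y = make_classification(n_samples=600, weights=[0.8, 0.2], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

clf = SVC(kernel="rbf", gamma="scale").fit(X_tr, y_tr)

# Post-processing: choose a threshold b so that at least `target_recall` of the
# positive training scores lie above it, then predict with sign(f(x) - b).
# (In practice a separate validation set would be used to pick b.)
target_recall = 0.95                              # assumed requirement
scores_pos = np.sort(clf.decision_function(X_tr)[y_tr == 1])
b = scores_pos[int((1 - target_recall) * len(scores_pos))]

y_pred = (clf.decision_function(X_te) >= b).astype(int)
recall = (y_pred[y_te == 1] == 1).mean()
print(f"threshold b = {b:.3f}, test recall on priority class = {recall:.3f}")
```

Lowering the threshold trades false positives for fewer false negatives, which is the asymmetry between the two error types that the abstract describes.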

Cited by 18 publications (11 citation statements)
References 23 publications
“…For regression problems, kernel functions are often used to predict the resulting outcome, such as the radial basis function (RBF), two-layer neural network, polynomial, sigmoid, linear, and exponential radial basis function (ERBF) kernels [55,56]. In recent years, SVM has been applied in many fields and publications; therefore, the details of SVM are not presented in this study but can be found in [57][58][59][60][61][62][63].…”
Section: SVM (mentioning)
confidence: 99%
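The citation statement above lists the kernels commonly used with SVMs. As a hedged illustration only (the cited works [55]–[63] are not reproduced here, and scikit-learn's SVR is an assumed implementation), the sketch below shows how those kernel choices are selected; ERBF is not built in and is supplied as a custom kernel callable.

```python
# Illustrative kernel selection for SVM regression with scikit-learn's SVR.
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(200, 1))
y = np.sin(X).ravel() + 0.1 * rng.standard_normal(200)

# Built-in kernels mentioned in the quote: RBF, polynomial, sigmoid, linear.
for kernel in ["rbf", "poly", "sigmoid", "linear"]:
    model = SVR(kernel=kernel, C=1.0).fit(X, y)
    print(kernel, round(model.score(X, y), 3))

# The exponential RBF (ERBF) kernel is not built in; it can be passed as a
# callable computing K(x, z) = exp(-gamma * ||x - z||) on the two inputs.
def erbf_kernel(A, B, gamma=0.5):
    d = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=2)
    return np.exp(-gamma * d)

print("erbf", round(SVR(kernel=erbf_kernel).fit(X, y).score(X, y), 3))
```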
“…Since, in the dataset, the number of instances in every class remains imbalanced (see Table 1), the use of accuracy or precision as the main performance metric can imply a significant skew (Chawla, 2005). It is therefore preferred to use sensitivity and specificity, since these remain unbiased metrics even when the classes are imbalanced (Gonzalez-Abril et al., 2014; Gonzalez-Abril et al., 2017). Therefore, when a single metric is required for the comparison of classifier results (i.e.…”
Section: Classification Performance Metrics (mentioning)
confidence: 99%
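The statement is cut off before naming the single metric the citing authors use, so the sketch below only shows how sensitivity and specificity are computed from a confusion matrix, with the geometric mean of the two as one common (assumed, not the quote's) single-number summary for imbalanced data.

```python
# Sensitivity and specificity from a binary confusion matrix (label 1 = positive).
import numpy as np
from sklearn.metrics import confusion_matrix

y_true = np.array([1, 1, 1, 0, 0, 0, 0, 0, 0, 0])   # assumed toy labels
y_pred = np.array([1, 1, 0, 0, 0, 1, 0, 0, 0, 0])

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()

sensitivity = tp / (tp + fn)   # recall on the positive class
specificity = tn / (tn + fp)   # recall on the negative class
g_mean = np.sqrt(sensitivity * specificity)   # one possible single metric

print(f"sensitivity={sensitivity:.2f} specificity={specificity:.2f} g-mean={g_mean:.2f}")
```

Unlike plain accuracy, both quantities are computed per class, so neither is dominated by the majority class when the data are imbalanced.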
“…Back-propagation neural network and SVM algorithms have been widely used for classification, identification, prediction, and detection, producing a fairly good degree of accuracy. Applications of back-propagation neural networks and SVM for classification include fruit classification, ship classification, natural gas pipeline classification, automatic text classification, cancer classification, audio sound classification, handling binary classification, enzyme classification, and object classification [6][7][8][9][10][11][12][13][14][15]. Applications of the two classifiers for identification include defect identification for simple fleshy fruits, handwritten character recognition, transcription factor binding site identification on the human genome, diagnosis of renal calculus disease, and automated speech signal analysis [16][17][18][19][20].…”
Section: Introduction (mentioning)
confidence: 99%