2009
DOI: 10.1007/978-1-60327-241-4_13
A User’s Guide to Support Vector Machines

Abstract: The Support Vector Machine (SVM) is a widely used classifier. And yet, obtaining the best results with SVMs requires an understanding of their workings and the various ways a user can influence their accuracy. We provide the user with a basic understanding of the theory behind SVMs and focus on their use in practice. We describe the effect of the SVM parameters on the resulting classifier, how to select good values for those parameters, data normalization, factors that affect training time, and software for tr…
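The practical workflow the abstract outlines (normalize the data, choose a kernel, then tune the SVM parameters) can be illustrated with a short, hypothetical scikit-learn sketch; the dataset, parameter grid, and split sizes below are illustrative assumptions, not values from the paper.

# Minimal sketch of the workflow described in the abstract (assumed data and grid).
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X, y = make_classification(n_samples=500, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

# Normalize features and train an RBF-kernel SVM; C and gamma are the two
# parameters that most strongly influence the resulting classifier.
pipe = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
grid = {"svc__C": [0.1, 1, 10, 100], "svc__gamma": [1e-3, 1e-2, 1e-1, 1]}
search = GridSearchCV(pipe, grid, cv=5)
search.fit(X_train, y_train)

print("best parameters:", search.best_params_)
print("held-out accuracy:", search.score(X_test, y_test))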

Cited by 611 publications (345 citation statements)
References 12 publications
“…Finally, an optimal single feature-based classifier was adjusted for each of the predictors, using support vector machines (SVM) with radial basis functions [50]. The classifier was evaluated following a leave one patient out cross-validation (LOPCV) scheme [51,52], where shocks within each patient were classified using the optimal SVM obtained for the rest of the shocks.…”
Section: Discussion (mentioning)
confidence: 99%
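The leave-one-patient-out evaluation described in this statement can be approximated with scikit-learn's LeaveOneGroupOut splitter, grouping samples by a hypothetical patient identifier; the feature matrix and patient labels below are placeholders, not data from the cited study.

# Sketch of a leave-one-patient-out evaluation of an RBF SVM (assumed data).
import numpy as np
from sklearn.model_selection import LeaveOneGroupOut, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(120, 8))             # one feature vector per shock (placeholder)
y = rng.integers(0, 2, size=120)          # shock outcome labels (placeholder)
patients = rng.integers(0, 15, size=120)  # hypothetical patient ID per shock

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0, gamma="scale"))
scores = cross_val_score(clf, X, y, groups=patients, cv=LeaveOneGroupOut())
print("mean LOPCV accuracy:", scores.mean())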
“…SVM [26] and Random Forest (RF) [27] classifiers are trained using these encoded feature representations over a corpus of exemplar imagery (X-ray image patches, Figure 3). SVMs are trained using Table 1.…”
Section: Bag-of-visual-words (mentioning)
confidence: 99%
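As a rough illustration of the step this statement describes, the sketch below trains an SVM and a Random Forest on pre-computed bag-of-visual-words histograms; the encoded features are random placeholders, and the classifier settings are assumptions rather than those of the cited work.

# Sketch: train SVM and Random Forest classifiers on BoVW-encoded patches (assumed data).
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC

rng = np.random.default_rng(1)
bovw_histograms = rng.random(size=(200, 64))  # placeholder codebook histograms per patch
labels = rng.integers(0, 2, size=200)         # placeholder object / non-object labels

svm_clf = SVC(kernel="rbf", C=10.0, gamma="scale").fit(bovw_histograms, labels)
rf_clf = RandomForestClassifier(n_estimators=100, random_state=1).fit(bovw_histograms, labels)

print("SVM training accuracy:", svm_clf.score(bovw_histograms, labels))
print("RF training accuracy:", rf_clf.score(bovw_histograms, labels))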
“…Table 4 Megherbi et al [66,67] present a comparison of classifier-based approaches using volumetric shape characteristics for the classification of pre-segmented 3D objects in cluttered baggage-CT imagery. Various combinations of three shape-based descriptors (3D Zernike descriptors [79]; Histogram-of-Shape Index (HSI) [29]; and a combination of the two) and five classifiers (Support Vector Machines (SVM) [10]; neural networks [105]; decision trees [85]; boosted decision trees [25]; and random forests [25]) are compared. Correct classification rates in excess of 98.0% are achieved on a limited dataset using the HSI descriptor with an SVM or random-forest classifier.…”
Section: Classification (mentioning)
confidence: 99%
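A comparison of the classifier families named in this statement could be sketched as below, scoring each on the same descriptor matrix via cross-validation; the data and hyperparameters are placeholders, not those used by Megherbi et al.

# Sketch: compare several classifiers on a shape-descriptor matrix (assumed data).
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(2)
descriptors = rng.random(size=(300, 32))  # placeholder shape descriptors (e.g. HSI / Zernike)
labels = rng.integers(0, 2, size=300)     # placeholder class labels

models = {
    "SVM": SVC(kernel="rbf"),
    "neural network": MLPClassifier(max_iter=1000),
    "decision tree": DecisionTreeClassifier(),
    "boosted trees": GradientBoostingClassifier(),
    "random forest": RandomForestClassifier(),
}
for name, model in models.items():
    scores = cross_val_score(model, descriptors, labels, cv=5)
    print(f"{name}: mean CV accuracy = {scores.mean():.3f}")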