Handbook of Neural Computation 2017
DOI: 10.1016/b978-0-12-811318-9.00027-2

Support Vector Machine: Principles, Parameters, and Applications

Cited by 199 publications
(140 citation statements)
References 24 publications
“…It discriminates the data by a hyperplane that fairly separates the two classes. In addition to linear classification, SVM can also perform non-linear classification by mapping the input into an n-dimensional feature space, where n is the number of features [10] [24].…”
Section: Support Vector Machine (SVM)
confidence: 99%
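The statement above contrasts a linear separating hyperplane with non-linear classification via a feature-space mapping. A minimal sketch of that contrast, using scikit-learn on a toy two-class problem (the dataset, library, and parameters are illustrative assumptions, not from the cited text):

```python
# Illustrative sketch (assumed setup, not from the chapter): a linear SVM
# versus a kernelized SVM on a toy problem that is not linearly separable.
from sklearn.datasets import make_moons
from sklearn.svm import SVC

X, y = make_moons(n_samples=200, noise=0.2, random_state=0)

# Linear SVM: a separating hyperplane directly in the input space.
linear_clf = SVC(kernel="linear").fit(X, y)

# RBF-kernel SVM: an implicit mapping into a higher-dimensional feature space.
rbf_clf = SVC(kernel="rbf").fit(X, y)

print("linear training accuracy:", linear_clf.score(X, y))
print("rbf training accuracy:", rbf_clf.score(X, y))
```

On such curved class boundaries the kernelized model typically fits better than the linear hyperplane, which is the point the quoted statement is making.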
“…Support Vector Machine was introduced in the 1990s and has been used for many engineering applications [23]. Support Vector Machine is an algorithm developed for binary classification by Cortes & Vapnik.…”
Section: Support Vector Machine Based Classification
confidence: 99%
“…In non-linear classification, the kernel trick maps the input from a low-dimensional feature space to a high-dimensional feature space. The Support Vector Machine algorithm provides a solution for a limited number of training data, but consumes more time for large databases [23]. It is used for text and hypertext categorization, image classification, image segmentation, and handwritten character recognition.…”
Section: Support Vector Machine Based Classification
confidence: 99%
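The kernel trick mentioned above computes inner products in a high-dimensional feature space without ever constructing that space explicitly. A small worked sketch (the specific kernel and numbers are my own illustration): for the degree-2 polynomial kernel k(x, z) = (x·z)² on 2-D inputs, the explicit feature map is φ(x) = (x₁², √2·x₁x₂, x₂²), and both routes give the same value.

```python
# Illustration (not from the cited text): the polynomial kernel (x.z)^2
# equals an inner product under the explicit map
# phi(x) = (x1^2, sqrt(2)*x1*x2, x2^2).
import numpy as np

def phi(x):
    # explicit feature map for the degree-2 polynomial kernel in 2-D
    return np.array([x[0] ** 2, np.sqrt(2) * x[0] * x[1], x[1] ** 2])

x = np.array([1.0, 2.0])
z = np.array([3.0, 0.5])

k_direct = (x @ z) ** 2       # kernel evaluated in the input space
k_mapped = phi(x) @ phi(z)    # same value via the explicit high-dimensional map

print(k_direct, k_mapped)     # both equal (1*3 + 2*0.5)^2 = 16.0
```

The "trick" is that k_direct never touches the 3-D feature space, which is why kernelized SVMs scale to feature maps far too large to build explicitly.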
“…The exact relationships between predictions and outcomes are unknown, hence selecting an optimal kernel function presents a challenge. There are no clear rules for selecting a single optimal kernel, but cross-validation is usually implemented [13,14,15]. An interesting property of kernels is that a linear combination of two kernel functions results in another kernel function [11].…”
confidence: 99%
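Both points in the statement above can be sketched concretely: kernel choice by cross-validation, and a nonnegative linear combination of two kernels used as a kernel in its own right. This is a hedged sketch assuming scikit-learn (the dataset, grid, and 0.5/0.5 weights are illustrative choices, not prescribed by the text):

```python
# Sketch (assumed setup): (1) pick a kernel by cross-validation;
# (2) use a linear combination of two kernels, itself a valid kernel.
from sklearn.datasets import load_iris
from sklearn.metrics.pairwise import linear_kernel, rbf_kernel
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)

# (1) No closed-form rule for the best kernel: compare candidates by 5-fold CV.
grid = GridSearchCV(SVC(), {"kernel": ["linear", "poly", "rbf", "sigmoid"]}, cv=5)
grid.fit(X, y)
print("kernel chosen by cross-validation:", grid.best_params_["kernel"])

# (2) A nonnegative combination of two kernels is another kernel;
#     SVC accepts it as a callable that returns the Gram matrix.
def combined(A, B):
    return 0.5 * linear_kernel(A, B) + 0.5 * rbf_kernel(A, B)

combined_clf = SVC(kernel=combined).fit(X, y)
print("combined-kernel training accuracy:", combined_clf.score(X, y))
```

The closure of kernels under nonnegative linear combination is what makes multiple-kernel learning possible: the combination weights can themselves be tuned, e.g. by the same cross-validation.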