2018
DOI: 10.1007/s10462-017-9611-1
Selecting training sets for support vector machines: a review

Abstract: Support vector machines (SVMs) are supervised classifiers successfully applied in a plethora of real-life applications. However, they suffer from an important shortcoming: their time and memory training complexities grow with the training set size. This issue is especially challenging nowadays, since the amount of data generated every second in many domains is tremendously large. This review provides an extensive survey of existing methods for selecting SVM training data from large datasets…
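As a minimal sketch of the problem the review addresses, the snippet below (Python with scikit-learn; the synthetic dataset and the random-subsampling selector are illustrative assumptions, not methods from the paper) trains an SVM on the full training set and on a 10% subset, showing how training time depends on training set size:

```python
# Minimal sketch: SVM training cost grows with the training set size, so
# selecting a smaller training set trades accuracy for speed. Random
# subsampling is only a naive baseline, NOT a method from the paper.
import time
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

X, y = make_classification(n_samples=10000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

for frac in (1.0, 0.1):  # full training set vs. a 10% random subset
    n = int(frac * len(X_tr))
    idx = np.random.RandomState(0).choice(len(X_tr), n, replace=False)
    clf = SVC(kernel="rbf")
    t0 = time.perf_counter()
    clf.fit(X_tr[idx], y_tr[idx])
    print(f"{frac:4.0%} of training data: {time.perf_counter() - t0:5.2f}s, "
          f"test accuracy = {clf.score(X_te, y_te):.3f}")
```

The selectors surveyed in the review aim to keep the informative samples (typically those near the decision boundary), so that accuracy degrades far less than under random subsampling.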

Cited by 282 publications (152 citation statements)
References 133 publications (151 reference statements)

Citation statements:
“…SVMs prove successful in nonlinear classification problems by mapping non-separable features into a higher-dimensional space, a procedure known as the kernel trick, which uses kernel functions such as the Radial Basis Function (RBF) or polynomial kernel [62]. In DT approaches, the training set is recursively split according to a chosen feature. A feature tree can be described by two entities, namely decision nodes and leaves.…”
Section: PPG Representations
confidence: 99%
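A small illustration of the kernel trick mentioned in the quote above, assuming Python with scikit-learn (the half-moons dataset and hyperparameters are arbitrary choices for demonstration, not taken from the cited work): a linear kernel struggles on two interleaving half-moons, while RBF and polynomial kernels implicitly map the points into a higher-dimensional space where they become separable.

```python
from sklearn.datasets import make_moons
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Two interleaving half-moons: not linearly separable in the input space.
X, y = make_moons(n_samples=500, noise=0.2, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

for kernel in ("linear", "rbf", "poly"):
    # RBF and polynomial kernels implicitly map the points into a
    # higher-dimensional space where a separating hyperplane exists.
    clf = SVC(kernel=kernel, degree=3, gamma="scale").fit(X_tr, y_tr)
    print(f"{kernel:>6} kernel: test accuracy = {clf.score(X_te, y_te):.3f}")
```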
“…Supervised learning infers classification functions from labeled training data. Major supervised learning approaches include multi-layer perceptron neural networks, decision tree learning, support vector machines and symbolic machine learning algorithms [3]. Unsupervised learning infers functions from the hidden structure of unlabeled data.…”
Section: Introduction
confidence: 99%
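A brief sketch of the supervised setting described in the quote above: each of the listed model families infers a classifier from labeled training data (Python with scikit-learn; the dataset and default hyperparameters are assumptions for illustration, not choices made in the cited work).

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=1000, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

models = {"multi-layer perceptron": MLPClassifier(max_iter=1000, random_state=0),
          "decision tree": DecisionTreeClassifier(random_state=0),
          "support vector machine": SVC()}
for name, model in models.items():
    model.fit(X_tr, y_tr)  # learn the classification function from labels
    print(f"{name:>22}: test accuracy = {model.score(X_te, y_te):.3f}")
```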
“…Unsupervised learning infers functions from the hidden structure of unlabeled data. The major unsupervised learning approaches are clustering (k-means, mixture models and hierarchical clustering), anomaly detection and neural-network methods (Hebbian learning and generative adversarial networks) [3]. Semi-supervised learning infers classification functions from a large amount of unlabeled data together with a small amount of labeled data [4].…”
Section: Introduction
confidence: 99%
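A small illustration of the semi-supervised setting from [4], assuming Python with scikit-learn: most labels are hidden (scikit-learn marks unlabeled samples with -1), and a self-training wrapper, one possible semi-supervised strategy not prescribed by the quote, bootstraps an SVM from the few labeled points.

```python
# Semi-supervised sketch: many unlabeled samples (marked -1, scikit-learn's
# convention) plus a few labeled ones. Self-training is one concrete
# strategy, chosen here purely for illustration.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.metrics import accuracy_score
from sklearn.semi_supervised import SelfTrainingClassifier
from sklearn.svm import SVC

X, y = make_classification(n_samples=1000, random_state=0)
rng = np.random.RandomState(0)
hidden = rng.rand(len(y)) < 0.95  # hide ~95% of the labels
y_partial = np.where(hidden, -1, y)

# Bootstrap an SVM from the ~5% labeled points, iteratively pseudo-labeling
# the unlabeled samples it is most confident about.
clf = SelfTrainingClassifier(SVC(probability=True)).fit(X, y_partial)
print(f"accuracy on the hidden labels: "
      f"{accuracy_score(y[hidden], clf.predict(X[hidden])):.3f}")
```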