2007
DOI: 10.1109/tnn.2006.883722
Reduced Support Vector Machines: A Statistical Theory

Abstract: In dealing with large data sets, the reduced support vector machine (RSVM) was proposed with the practical objective of overcoming computational difficulties as well as reducing model complexity. In this paper, we study the RSVM from the viewpoint of sampling design, its robustness, and the spectral analysis of the reduced kernel. We consider the nonlinear separating surface as a mixture of kernels. Instead of a full model, the RSVM uses a reduced mixture with kernels sampled from a certain candidate set.…
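
To make the abstract's reduced-kernel idea concrete, here is a minimal numpy sketch (the RBF kernel, sample sizes, and names are illustrative assumptions, not taken from the paper): the full model expands the separating surface over kernels centered at all m training points, while the RSVM keeps only a sampled subset of m̄ of them, giving a rectangular m × m̄ kernel.

```python
import numpy as np

def rbf_kernel(X, Y, gamma=0.1):
    """Gaussian kernel matrix: K[i, j] = exp(-gamma * ||X_i - Y_j||^2)."""
    sq_dists = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-gamma * sq_dists)

rng = np.random.default_rng(0)
A = rng.normal(size=(1000, 5))                          # full data matrix, m x n
A_bar = A[rng.choice(len(A), size=50, replace=False)]   # reduced set, m_bar rows

K_full = rbf_kernel(A, A)      # m x m full kernel: one kernel per data point
K_red = rbf_kernel(A, A_bar)   # m x m_bar reduced kernel: sampled mixture terms
print(K_full.shape, K_red.shape)  # (1000, 1000) (1000, 50)
```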

Cited by 205 publications (73 citation statements)
References 28 publications
“…The random choice of B holds the key to our privacy-preserving approximation and has been used effectively in SVM classification problems [14]. Computational results have shown that there is no essential difference between using a random B or a random submatrix Ā of the rows of A in reduced SVMs [9,8].…”
Section: Privacy-preserving Linear Kernel Approximation (mentioning)
confidence: 99%
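
A hypothetical sketch of the two choices of B compared in the statement above (numpy, with an assumed RBF kernel and illustrative sizes); either rectangular kernel can then be fed to the same reduced SVM solver:

```python
import numpy as np

rng = np.random.default_rng(1)
m, n, m_bar = 500, 10, 25
A = rng.normal(size=(m, n))      # data matrix

# Choice 1: reduced set drawn from the rows of A (the classic RSVM choice)
B_rows = A[rng.choice(m, size=m_bar, replace=False)]
# Choice 2: a completely random matrix B (the privacy-preserving choice)
B_rand = rng.normal(size=(m_bar, n))

def rbf(X, Y, gamma=0.1):
    return np.exp(-gamma * ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1))

K_rows = rbf(A, B_rows)   # m x m_bar reduced kernel from data rows
K_rand = rbf(A, B_rand)   # m x m_bar reduced kernel from random B
```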
“…For a given data matrix A ∈ R^{m×n}, instead of using the usual kernel function K(A, A′) : R^{m×n} × R^{n×m} → R^{m×m} for constructing a linear or nonlinear approximation of a given y ∈ R^m corresponding to the m rows of A, we use a random kernel [9,8] K(A, B′) : R^{m×n} × R^{n×m̄} → R^{m×m̄}, m̄ < n, where B is a completely random matrix that is publicly disclosed. Such a random kernel will be shown to completely hide the data matrix A.…”
Section: Introduction (mentioning)
confidence: 99%
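
For the linear-kernel case, a short numpy/scipy sketch (illustrative, under the quote's assumption m̄ < n) of why publishing K(A, B′) = A B′ together with B hides A: the map A ↦ A B′ is many-to-one, so the published product cannot identify the private data.

```python
import numpy as np
from scipy.linalg import null_space

rng = np.random.default_rng(2)
m, n, m_bar = 6, 8, 3            # m_bar < n, as the quote requires
A = rng.normal(size=(m, n))      # private data matrix
B = rng.normal(size=(m_bar, n))  # completely random, publicly disclosed

P = A @ B.T                      # published linear random kernel K(A, B') = A B'

# Any A2 = A + C with the rows of C in the null space of B publishes the
# same product, so (P, B) cannot recover A.
N = null_space(B)                               # n x (n - m_bar) orthonormal basis
A2 = A + rng.normal(size=(m, N.shape[1])) @ N.T
assert np.allclose(A2 @ B.T, P)
```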
“…The genetic algorithm (GA), a method that searches for optimal solutions by imitating the process of natural evolution, has strong robustness and global search ability in optimization problems [14]. To give full play to the genetic algorithm's unique advantages in encoded operations, this paper represents the classifier parameters and the feature subset together in each individual, encoded in an appropriate form, with a fitness function that reflects both optimization criteria; the genetic algorithm thereby realizes the joint optimization of feature selection and classifier parameters.…”
Section: Classifier Parameter Optimization and Feature (mentioning)
confidence: 99%
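
The joint encoding described in the statement above can be sketched as follows (a hypothetical Python skeleton, not the cited paper's implementation; the `fitness` callback is assumed to be, e.g., cross-validated accuracy of a classifier trained with the decoded feature subset and parameters):

```python
import random

N_FEAT = 20  # number of candidate features (illustrative)

def random_individual():
    # Chromosome = binary feature-selection mask + two real-coded parameters
    mask = [random.randint(0, 1) for _ in range(N_FEAT)]
    params = [random.uniform(0.01, 100.0),  # e.g. an SVM cost parameter C
              random.uniform(1e-4, 10.0)]   # e.g. an RBF kernel width gamma
    return mask + params

def crossover(a, b):
    # Single-point crossover over the joint chromosome
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:]

def mutate(ind, p=0.05):
    out = ind[:]
    for i in range(N_FEAT):
        if random.random() < p:
            out[i] ^= 1                         # flip a feature bit
    for i in (N_FEAT, N_FEAT + 1):
        if random.random() < p:
            out[i] *= random.uniform(0.5, 2.0)  # perturb a parameter gene
    return out

def ga(fitness, pop_size=30, generations=50):
    # fitness(individual) -> float; assumed to score the classifier built
    # from the individual's feature subset and parameter genes
    pop = [random_individual() for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        elite = pop[: pop_size // 2]
        pop = elite + [mutate(crossover(*random.sample(elite, 2)))
                       for _ in range(pop_size - len(elite))]
    return max(pop, key=fitness)
```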
“…Basically, the RSVM is studied from the viewpoint of sampling design, its robustness, and the spectral analysis of the reduced kernel [2].…”
Section: RSVM (mentioning)
confidence: 99%
“…We call it the weight parameter. We note that the nonnegative constraint ξ ≥ 0 can be removed because of the term ‖ξ‖₂² in the objective function. We use an m×m diagonal matrix D, where D_ii = y_i ∈ {−1, 1}, to specify the corresponding class membership of each input point.…”
Section: RSVM Formulation (mentioning)
confidence: 99%
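
For context, the notation in this quote is consistent with the standard smooth RSVM formulation; the following is a sketch under that assumption (ν is the weight parameter and ξ the slack vector mentioned in the quote, ũ the reduced coefficient vector, Ā the reduced set):

```latex
\min_{(\tilde{u},\, b,\, \xi)} \;
  \frac{\nu}{2}\,\lVert \xi \rVert_2^2
  + \frac{1}{2}\left( \tilde{u}^{\top}\tilde{u} + b^2 \right)
\quad \text{s.t.} \quad
  D\left( K(A, \bar{A}^{\top})\,\tilde{u} + \mathbf{1}\,b \right) + \xi \ge \mathbf{1}.
```

Because ξ enters the objective only through ‖ξ‖₂², the optimum sets each slack component to max(0, violation) automatically, which is why the explicit constraint ξ ≥ 0 can be dropped.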