2017
DOI: 10.1109/tcyb.2016.2565683

Implementing Kernel Methods Incrementally by Incremental Nonlinear Projection Trick

Abstract: Recently, the nonlinear projection trick (NPT) was introduced, enabling direct computation of the coordinates of samples in a reproducing kernel Hilbert space. With NPT, any machine learning algorithm can be extended to a kernel version without relying on the so-called kernel trick. However, NPT is inherently difficult to implement incrementally, because an ever-growing kernel matrix must be handled as additional training samples are introduced. In this paper, an incremental version of the NPT (INPT) is pr…
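For orientation, below is a minimal sketch of the batch NPT computation the abstract refers to. It assumes the standard eigendecomposition-based formulation with an RBF kernel and uses illustrative names; it is not the incremental (INPT) algorithm proposed in the paper.

```python
# Minimal sketch of the (batch) nonlinear projection trick: explicit RKHS
# coordinates obtained from an eigendecomposition of the kernel matrix.
# Illustrative only; not the incremental (INPT) algorithm of the paper.
import numpy as np

def rbf_kernel(A, B, sigma):
    # Pairwise RBF kernel values k(a, b) = exp(-||a - b||^2 / (2 * sigma^2)).
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def npt_coordinates(X, sigma, tol=1e-10):
    # Eigendecompose K = U diag(lam) U^T and keep the positive eigenvalues.
    K = rbf_kernel(X, X, sigma)
    lam, U = np.linalg.eigh(K)
    keep = lam > tol
    lam, U = lam[keep], U[:, keep]
    # Coordinates of the training samples: Y = diag(lam)^{1/2} U^T,
    # so that Y^T Y reproduces the (rank-truncated) kernel matrix K.
    Y = np.diag(np.sqrt(lam)) @ U.T
    # Out-of-sample projection of a new sample x with kernel vector k(x):
    # y = diag(lam)^{-1/2} U^T k(x).
    def project(x):
        k_x = rbf_kernel(X, x[None, :], sigma).ravel()
        return np.diag(1.0 / np.sqrt(lam)) @ U.T @ k_x
    return Y, project
```

Any linear learning algorithm can then be run directly on the columns of Y, which is the sense in which NPT avoids the kernel trick; the paper's contribution is updating these quantities incrementally as the kernel matrix grows, rather than recomputing them from scratch.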

Cited by 9 publications (10 citation statements)
References 16 publications (15 reference statements)
“…We used the RBF kernel function and set the value of σ equal to the mean pairwise distance between the positive training vectors. For the small datasets, we used the method in [24], keeping the eigenvectors corresponding to all positive eigenvalues, while for the large datasets we used the method in [11], setting the dimensionality of the resulting kernel subspace to L = 1000. On each class-specific problem, we ran five experiments by randomly selecting 70% of the positive and negative classes for training and the remaining 30% for testing, and measured the average performance value.…”
Section: Methods
confidence: 99%
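As a hedged illustration of the setup described in the quoted passage, the snippet below computes an RBF bandwidth σ as the mean pairwise distance between the positive training vectors; the exact conventions of the cited works (e.g., whether zero diagonal distances are excluded) may differ.

```python
import numpy as np

def mean_pairwise_sigma(X_pos):
    # Mean Euclidean distance over all distinct pairs of positive samples,
    # used as the RBF bandwidth sigma. Zero diagonal distances are excluded.
    diff = X_pos[:, None, :] - X_pos[None, :, :]
    dist = np.sqrt((diff ** 2).sum(-1))
    n = dist.shape[0]
    return dist[~np.eye(n, dtype=bool)].mean()
```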
“…This is achieved by using Φ = Σ^{1/2} U^T, where U and Σ contain the eigenvectors and eigenvalues of the kernel matrix K ∈ R^{N×N} [24]. Thus, an extension of PCSDA to the non-linear (kernel) case can readily be obtained by applying the above-described linear PCSDA on the vectors φ_i, i = 1, …”
Section: F. Non-Linear PCSDA
confidence: 99%
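The construction quoted above can be checked numerically: the columns of Φ = Σ^{1/2} U^T act as explicit feature vectors whose inner products reproduce K. The sketch below assumes only a symmetric positive semi-definite kernel matrix; PCSDA itself is not reproduced here.

```python
import numpy as np

def explicit_features(K, tol=1e-10):
    # K is assumed to be a symmetric positive semi-definite kernel matrix.
    eigvals, U = np.linalg.eigh(K)
    keep = eigvals > tol
    Sigma_half = np.diag(np.sqrt(eigvals[keep]))
    return Sigma_half @ U[:, keep].T          # Phi = Sigma^{1/2} U^T

rng = np.random.default_rng(0)
A = rng.standard_normal((6, 3))
K = A @ A.T                                   # toy PSD kernel matrix
Phi = explicit_features(K)
print(np.allclose(Phi.T @ Phi, K))            # True: inner products recover K
```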
“…In the case where K is centered in F, R^N is the space defined by kernel Principal Component Analysis (kPCA) [2]. Moreover, as has been shown in [9], [10], the kernel matrix need not be centered. In the latter case, N is called the effective dimensionality of F and R^N is the corresponding effective subspace of F. This is essentially the same as uncentered kernel PCA.…”
Section: Preliminaries
confidence: 99%
“…where e ∈ R^N is a vector having all its elements equal to 1/N. Combining (10) with (6), we obtain:…”
Section: A. CMVCA Preserves the Class Means to Total Mean Distances
confidence: 99%
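Equations (10) and (6) referenced in the quote are not shown on this page, so only the role of the vector e can be illustrated. The sketch below shows the usual way a vector with all entries 1/N enters kernel centering; this is an assumption about the intended use, not a reproduction of the cited derivation.

```python
import numpy as np

def center_kernel(K):
    # e has all entries 1/N, so E = 1 e^T is the N x N matrix filled with 1/N.
    N = K.shape[0]
    e = np.full(N, 1.0 / N)
    E = np.outer(np.ones(N), e)
    # Centered kernel: K_c = (I - E) K (I - E), i.e. data centered in feature space.
    I = np.eye(N)
    return (I - E) @ K @ (I - E)
```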
“…One of the main drawbacks of subspace learning methods lies in their low speed on high-dimensional data and large datasets. To speed up the training process, several approaches have been proposed, including approximate solutions [1], incremental learning [10], and speed-up solutions [11,12,13,14,15]. In this paper, we propose a speed-up approach for SDA and its kernelized form, i.e., Kernel Subclass Discriminant Analysis (KSDA) [16].…”
Section: Introduction
confidence: 99%