2014
DOI: 10.1007/s00500-014-1332-7
A novel SVM by combining kernel principal component analysis and improved chaotic particle swarm optimization for intrusion detection

Cited by 115 publications (60 citation statements)
References 21 publications
“…KPCA (Kuang et al.) performs a nonlinear PCA in the transformed linear space through a kernel function $\Psi(\mathbf{x},\mathbf{y}) = (\varphi(\mathbf{x}), \varphi(\mathbf{y}))$ that essentially sets up a nonlinear mapping $\varphi(\mathbf{x})$ from the original space to the feature space to attain the optimum solution. The $h$th principal component $Q^{(h)}$ can be measured via the projection from a point in the feature space: $$Q^{(h)T}\varphi(\mathbf{x}) = \left( \sum_{i=1}^{D} \rho_i^{(h)}\,\varphi(\mathbf{x}_i),\ \varphi(\mathbf{x}) \right),$$ where $D$ is the number of instances in the dataset and $\rho^{(h)}$ is the $h$th eigenvector.…”
Section: Methods
confidence: 99%
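The projection formula in the quoted passage can be made concrete with a minimal NumPy sketch of kernel PCA: build a kernel matrix, center it in feature space, take the top eigenvectors $\rho^{(h)}$, and project points via the kernel expansion. The RBF kernel and the `gamma` value are illustrative assumptions; the excerpt does not specify the kernel used in the cited paper.

```python
# Minimal kernel PCA sketch (assumed RBF kernel; parameters are illustrative).
import numpy as np

def rbf_kernel(X, Y, gamma=0.1):
    """K[i, j] = exp(-gamma * ||x_i - y_j||^2)."""
    sq = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-gamma * sq)

def kernel_pca(X, n_components=2, gamma=0.1):
    D = X.shape[0]                      # number of instances, as in the quote
    K = rbf_kernel(X, X, gamma)
    # Center the kernel matrix (equivalent to centering phi(x) in feature space).
    one = np.ones((D, D)) / D
    Kc = K - one @ K - K @ one + one @ K @ one
    # Eigendecomposition; eigh returns eigenvalues in ascending order.
    vals, vecs = np.linalg.eigh(Kc)
    idx = np.argsort(vals)[::-1][:n_components]
    vals, vecs = vals[idx], vecs[:, idx]
    # Normalize so each feature-space eigenvector has unit length:
    # rho^(h) = alpha^(h) / sqrt(lambda_h).
    rho = vecs / np.sqrt(np.maximum(vals, 1e-12))
    # Projection of training points onto the h-th component:
    # Q^(h)^T phi(x) = sum_i rho_i^(h) k(x_i, x).
    return Kc @ rho

X = np.random.RandomState(0).randn(100, 41)  # toy data; 41 features echoes KDD-style intrusion records
Z = kernel_pca(X, n_components=5)
print(Z.shape)  # (100, 5)
```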
“…SVMs can learn patterns from large datasets and scale well because the complexity of SVM classification does not depend on the dimensionality of the features. SVMs also have the ability to dynamically update the training pattern every time there is a new pattern for classification [14].…”
Section: Support Vector Machine
confidence: 99%
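A short scikit-learn sketch illustrates the quoted scaling point: a trained SVM's decision function is a kernel expansion over support vectors, so prediction cost depends on the number of support vectors rather than the raw feature dimensionality. The dataset and hyperparameters here are illustrative assumptions, not taken from the cited paper.

```python
# Sketch: SVM prediction cost tracks support vectors, not feature dimension.
from sklearn.svm import SVC
from sklearn.datasets import make_classification

X, y = make_classification(n_samples=500, n_features=100, random_state=0)
clf = SVC(kernel="rbf", C=1.0, gamma="scale").fit(X, y)

# The decision function uses only the support vectors, whatever the dimension:
print(clf.support_vectors_.shape)   # (n_support_vectors, 100)
print(clf.score(X, y))
```

The quote's point about dynamically updating the training pattern would, in a scikit-learn setting, typically be realized with an incremental learner such as a linear SVM trained via `SGDClassifier.partial_fit`; `SVC` itself retrains from scratch.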
“…By performing this nonlinear mapping, it is expected that the complex distribution of face patterns becomes linearly separable in the kernel feature space. Following the success of applying the kernel trick in support vector machines (SVMs) (Ha et al. 2013; Kuang et al. 2015), many kernel-based PCA/LDA methods have been developed and applied in pattern recognition tasks, such as kernel PCA (KPCA) (Schölkopf et al. 1998), kernel Fisher discriminant (KFD), generalized discriminant analysis (GDA) (Baudat and Anouar 2000), and kernel direct LDA (KDDA) (Lu et al. 2003).…”
Section: Introduction
confidence: 99%
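The "linearly separable in the kernel feature space" claim in this excerpt is easy to demonstrate with a brief sketch: concentric circles cannot be split by a linear classifier in the input space, but after an RBF kernel PCA mapping a plain logistic regression separates them. The dataset and `gamma` value are illustrative assumptions, not drawn from the cited papers.

```python
# Sketch: the kernel trick makes a nonlinearly separable pattern linearly separable.
from sklearn.datasets import make_circles
from sklearn.decomposition import KernelPCA
from sklearn.linear_model import LogisticRegression

X, y = make_circles(n_samples=400, factor=0.3, noise=0.05, random_state=0)
Z = KernelPCA(n_components=2, kernel="rbf", gamma=10.0).fit_transform(X)

print(LogisticRegression().fit(X, y).score(X, y))  # low: circles are not linearly separable
print(LogisticRegression().fit(Z, y).score(Z, y))  # markedly higher after the kernel mapping
```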