2010
DOI: 10.1007/s00521-010-0337-0

Feature space versus empirical kernel map and row kernel space in SVMs

Abstract: In machine-learning technologies, the support vector machine (SV machine, SVM) is a brilliant invention with many merits, such as freedom from local minima, the widest possible margins separating different clusters, and a solid theoretical foundation. In this paper, we first explore the linear separability relationships between the high-dimensional feature space H and the empirical kernel map U, as well as between H and the space of kernel outputs K. Second, we investigate the relations of the distances between …
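To make the three spaces in the abstract concrete, here is a minimal sketch of the standard empirical kernel map construction (in the style of Schölkopf & Smola; the exact normalization used in the paper may differ). The row kernel space K represents each point by its row of kernel outputs against the training set; whitening those rows by the inverse square root of the Gram matrix gives the empirical kernel map U, whose inner products reproduce the kernel values of the implicit space H on the training set. All names and parameters below are illustrative assumptions.

```python
import numpy as np
from sklearn.metrics.pairwise import rbf_kernel

rng = np.random.default_rng(0)
X = rng.normal(size=(20, 3))           # 20 training points in R^3

K = rbf_kernel(X, X, gamma=0.5)        # Gram matrix of the implicit feature space H

# Row kernel space K: each point is represented by its row of kernel outputs.
row_map = K                            # shape (n, n)

# Empirical kernel map U: whiten the rows with K^{-1/2} so that inner
# products in U reproduce the kernel values on the training set.
w, V = np.linalg.eigh(K)
w = np.clip(w, 1e-12, None)            # guard tiny eigenvalues before the inverse root
K_inv_sqrt = V @ np.diag(w ** -0.5) @ V.T
emp_map = row_map @ K_inv_sqrt         # rows are Phi_e(x_i)

# Check: <Phi_e(x_i), Phi_e(x_j)> equals k(x_i, x_j) on the training set.
print(np.allclose(emp_map @ emp_map.T, K, atol=1e-6))   # True
```

Since emp_map works out to K^{1/2}, the product emp_map @ emp_map.T recovers K exactly, which is the sense in which U is an explicit, finite-dimensional stand-in for H on the training sample.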

Cited by 4 publications (4 citation statements; two from 2013, two from 2020) · References 14 publications
“…However, EKM can avoid the above issues and be more flexible. Liang et al. [33] demonstrate that the separating hyperplane based on EKM is manipulable in the feature space. Further, Xiong et al. [56] derive an effective kernel optimization algorithm by employing a data-dependent kernel.…”
Section: Related Work (mentioning)
confidence: 99%
“…Generally speaking, the dimension of the original feature space is very high, sometimes infinite, but the dimension of the empirical feature space is at most the number of training instances [11]. So it is reasonable, in both theory and practice, to process data in the empirical feature space.…”
Section: Feature Mapping (mentioning)
confidence: 99%
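As a sketch of what "processing data in the empirical feature space" can look like in practice: the RBF kernel's implicit feature space H is infinite-dimensional, yet a plain linear classifier trained on the (at most n-dimensional) kernel-output representation behaves much like a kernel SVM. The dataset, split, and hyperparameters below are illustrative assumptions, not settings from the cited papers.

```python
import numpy as np
from sklearn.datasets import make_moons
from sklearn.svm import SVC, LinearSVC
from sklearn.metrics.pairwise import rbf_kernel

X, y = make_moons(n_samples=200, noise=0.2, random_state=0)
Xtr, ytr, Xte, yte = X[:150], y[:150], X[150:], y[150:]

# H is infinite-dimensional for the RBF kernel, but the empirical feature
# space has at most len(Xtr) = 150 dimensions.
K_tr = rbf_kernel(Xtr, Xtr, gamma=1.0)   # (150, 150) training representation
K_te = rbf_kernel(Xte, Xtr, gamma=1.0)   # each test point -> 150-dim vector

linear = LinearSVC(C=1.0, max_iter=10000).fit(K_tr, ytr)
kernel_svm = SVC(kernel="rbf", gamma=1.0, C=1.0).fit(Xtr, ytr)

print("linear SVM on empirical features:", linear.score(K_te, yte))
print("kernel SVM in implicit space:    ", kernel_svm.score(Xte, yte))
```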
“…The first layer, through the primary kernel, attempts to efficiently cluster the data, whereas the second one derives linear boundaries for class separation of the clustered data. Furthermore, in (Liang 2010) the linear separability of samples in the SVM hidden space is examined and multiple parameterizations of polynomial second-stage kernels are evaluated. The findings indicate that the linear decision plane in the hidden space might not be adequately optimal, and thus a more complex decision mapping may be needed.…”
Section: Introduction (mentioning)
confidence: 99%
“…Such mappings are used in DNA analysis for feature reduction (Sammon maps), and lately in social-network analysis to measure cluster compactness and the affinity of single points to neighboring clusters. In view of its utility as a distance mapping, the two-layer kernel approach presents specific peculiarities that limit the predictive power of HS-SVMs compared to SVMs, as reported in (Liang 2010). More specifically, the first layer aims to cluster the input data in the augmented space (often denoted by φ(x)) by means of the distances among the samples defined by the corresponding kernel.…”
Section: Introduction (mentioning)
confidence: 99%
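The two citing statements above describe a two-layer (hidden-space) kernel: a first-stage kernel maps each point to its vector of kernel outputs, and a second-stage kernel, here polynomial, operates on those vectors. A minimal sketch of that idea follows, using a precomputed Gram matrix; the RBF first stage, the polynomial degree, and the offset c0 are assumed parameters for illustration, not the ones evaluated by the citing authors.

```python
import numpy as np
from sklearn.datasets import make_moons
from sklearn.svm import SVC
from sklearn.metrics.pairwise import rbf_kernel

X, y = make_moons(n_samples=200, noise=0.25, random_state=1)
Xtr, ytr, Xte, yte = X[:150], y[:150], X[150:], y[150:]

gamma, degree, c0 = 1.0, 2, 1.0   # hypothetical second-stage parameters

# First layer: represent every point by its RBF kernel outputs against the
# training set (the SVM "hidden space").
H_tr = rbf_kernel(Xtr, Xtr, gamma=gamma)
H_te = rbf_kernel(Xte, Xtr, gamma=gamma)

# Second layer: a polynomial kernel computed between hidden-space vectors.
K2_tr = (H_tr @ H_tr.T + c0) ** degree
K2_te = (H_te @ H_tr.T + c0) ** degree

clf = SVC(kernel="precomputed", C=1.0).fit(K2_tr, ytr)
print("two-layer kernel SVM accuracy:", clf.score(K2_te, yte))
```

Replacing the polynomial second stage with the identity recovers an SVM trained directly on the kernel outputs, which makes it easy to compare the linear hidden-space boundary against the more complex second-stage mapping the citing authors discuss.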