2004
DOI: 10.1007/978-3-540-30215-5_16

On Kernels, Margins, and Low-Dimensional Mappings

Abstract: Kernel functions are typically viewed as providing an implicit mapping of points into a high-dimensional space, with the ability to gain much of the power of that space without incurring a high cost if the data are separable in that space by a large margin γ. However, the Johnson-Lindenstrauss lemma suggests that in the presence of a large margin, a kernel function can also be viewed as a mapping to a low-dimensional space, one of dimension only Õ(1/γ²). In this paper, we explore the question of whether o…
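To make the abstract's dimension bound concrete, below is a minimal illustrative sketch, not taken from the paper: it takes explicitly represented data that is linearly separable by margin γ, applies a random Gaussian projection to roughly (1/γ²) log n dimensions in the spirit of the Johnson-Lindenstrauss lemma, and checks that the margin is approximately preserved. The synthetic data, the mean-difference separator used as a stand-in for the max-margin direction, and all parameter choices are assumptions of this example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: two well-separated Gaussian clusters in R^50, standing in for
# the (explicit) image of the data under a kernel's feature mapping.
n, D = 200, 50
X_pos = rng.normal(loc=+1.0, scale=0.3, size=(n, D))
X_neg = rng.normal(loc=-1.0, scale=0.3, size=(n, D))
X = np.vstack([X_pos, X_neg])
y = np.hstack([np.ones(n), -np.ones(n)])
X /= np.linalg.norm(X, axis=1, keepdims=True)  # margins are scale-relative

def margin(X, y):
    """Margin w.r.t. the normalized class-mean difference, a crude stand-in
    for the true max-margin separator (sufficient for this illustration)."""
    w = X[y > 0].mean(axis=0) - X[y < 0].mean(axis=0)
    w /= np.linalg.norm(w)
    return float((y * (X @ w)).min())

gamma = margin(X, y)
d = int(np.ceil(np.log(2 * n) / gamma**2))  # ~ (1/gamma^2) * log n dimensions

# Random Gaussian projection R^D -> R^d, scaled so norms (and hence margins)
# are approximately preserved.
R = rng.normal(size=(D, d)) / np.sqrt(d)
X_low = X @ R

print(f"original margin gamma   = {gamma:.3f}")
print(f"projected dimension d   = {d}")
print(f"margin after projection = {margin(X_low, y):.3f}")
```

Running the sketch shows the projected data remaining separable by a comparable margin in far fewer dimensions, which is the low-dimensional view of kernels that the abstract describes.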

Cited by 18 publications (12 citation statements)
References 11 publications
“…Some of the questions we address can be viewed as a generalization of questions studied in machine learning of what properties of similarity functions (especially kernel functions) are sufficient to allow one to learn well [6,7,21,31,29]. E.g., the usual statement is that if a kernel function satisfies the property that the target function is separable by a large margin in the implicit kernel space, then learning can be done from few labeled examples.…”
Section: Connections To Other Related Work
Mentioning, confidence: 99%
“…We also give formal relationships between these properties and those considered implicitly by approximation algorithms for standard clustering objectives. We then analyze a much weaker average-attraction property in Section 4 that has close connections to large margin properties studied in Learning Theory [6,7,21,31,29]. This property is not sufficient to produce a hierarchical clustering, however, so we then turn to the question of how weak a property can be and still be sufficient for hierarchical clustering, which leads us to analyze properties motivated by game-theoretic notions of stability in Section 5.…”
Section: Transductive Vs Inductive
Mentioning, confidence: 99%
“…We will show that, from a practical point of view, using the marginal information can actually improve the bounds. Since we know the exact distribution in this case, we might as well compute k exactly by iteratively solving a nonlinear equation:…”
Section: Some Tail Bounds
Mentioning, confidence: 99%
“…Random projections [1] have been used in machine learning [2][3][4][5][6] and many other applications in data mining and information retrieval, e.g., [7][8][9][10][11][12].…”
Section: Introduction
Mentioning, confidence: 99%