2010
DOI: 10.1007/978-3-642-12127-2_18

Combining Multiple Kernels by Augmenting the Kernel Matrix

Abstract: In this paper we present a novel approach to combining multiple kernels where the kernels are computed from different information channels. In contrast to traditional methods that learn a linear combination of n kernels of size m × m, resulting in m coefficients in the trained classifier, we propose a method that can learn n × m coefficients. This allows assigning different importance to each information channel per example rather than per kernel. We analyse the proposed kernel combination in empirical feature …
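
To make the coefficient count concrete, the contrast described in the abstract can be written out as decision functions; this is a reconstruction from the abstract's wording, not the paper's own notation:

f_{\mathrm{MKL}}(x) = \sum_{i=1}^{m} \alpha_i \, y_i \sum_{p=1}^{n} \beta_p \, K_p(x_i, x), with m dual coefficients \alpha_i shared across all n channels plus n kernel weights \beta_p;

f_{\mathrm{AKM}}(x) = \sum_{p=1}^{n} \sum_{i=1}^{m} \alpha_{p,i} \, y_i \, K_p(x_i, x), with n \times m coefficients \alpha_{p,i}, one per example per channel.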

Cited by 11 publications (16 citation statements) | References 17 publications (21 reference statements)
“…The primal and its corresponding dual for a linear combination of kernels are derived for various formulations in [1,9,10,19]. In contrast, in AKM [23], given a set of base training kernels (K_p) the augmented kernel is defined as follows:…”
Section: AKM and Its Correspondence to Classifier Fusion
confidence: 99%
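
The quoted definition is truncated. In the AKM formulation the base kernels are commonly stacked into a block-diagonal matrix of size nm × nm; the sketch below shows that construction in NumPy, as an illustrative reading rather than code from [23]:

import numpy as np

def augment_kernels(kernels):
    # Stack n base kernel matrices (each m x m) into a block-diagonal
    # augmented kernel of size (n*m) x (n*m); off-diagonal blocks are zero,
    # so every channel keeps its own copy of the training examples.
    m = kernels[0].shape[0]
    n = len(kernels)
    K_aug = np.zeros((n * m, n * m))
    for p, K in enumerate(kernels):
        K_aug[p * m:(p + 1) * m, p * m:(p + 1) * m] = K
    return K_aug

# Two hypothetical information channels over the same five training examples.
rng = np.random.default_rng(0)
X1, X2 = rng.normal(size=(5, 3)), rng.normal(size=(5, 4))
K1, K2 = X1 @ X1.T, X2 @ X2.T      # linear base kernels, both 5 x 5
K_aug = augment_kernels([K1, K2])  # 10 x 10 augmented kernel

Because the off-diagonal blocks are zero, each training example appears once per channel, which is what lets a single classifier attach a separate coefficient to every (example, channel) pair.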
“…The key idea of MKL is to learn a linear combination of base kernels by maximizing the soft margin between classes [10]. Alternatively, AKM [23] is proposed, arguing that in MKL a single kernel corresponding to a particular feature space is attributed a single weight. Therefore, MKL does not exploit information from individual samples in different feature spaces; e.g., in the context of object recognition, some samples can carry more shape information while others may carry more texture information for the same object category.…”
Section: Introduction
confidence: 99%
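
For contrast, here is a minimal sketch of the plain linear combination this statement describes, with one fixed weight per kernel shared by all samples; scikit-learn's precomputed-kernel SVM is my choice of classifier for illustration, not something the cited papers prescribe:

import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X1, X2 = rng.normal(size=(6, 3)), rng.normal(size=(6, 4))  # two channels
y = np.array([0, 0, 0, 1, 1, 1])
K1, K2 = X1 @ X1.T, X2 @ X2.T    # one base kernel per channel

# Linear MKL combination: a single weight per kernel, shared by every sample.
betas = [0.5, 0.5]               # fixed here for illustration; MKL learns these
K = betas[0] * K1 + betas[1] * K2
clf = SVC(kernel="precomputed").fit(K, y)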
“…In essence, MKL is simply a linear combination of different kernels. It implies that the contribution of each kernel is fixed for all the training samples [19]. This seems to be an unnecessarily strong constraint.…”
Section: Related Work
confidence: 99%
“…This seems to be an unnecessarily strong constraint. In [19], the authors propose to learn augmented coefficients for each sample in each feature channel. They achieve this by augmenting the kernel matrices.…”
Section: Related Work
confidence: 99%
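
An illustrative sketch of what augmenting the kernel matrices buys at training time: with a block-diagonal kernel and the labels replicated once per channel, the trained SVM holds up to n*m dual coefficients, i.e. a per-sample weight in each feature channel. This is my reading of [19], assuming the block-diagonal construction; it is not code from the paper:

import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(1)
m = 6
y = np.array([-1, -1, -1, 1, 1, 1])
X1, X2 = rng.normal(size=(m, 3)), rng.normal(size=(m, 4))
K1, K2 = X1 @ X1.T, X2 @ X2.T
K_aug = np.block([[K1, np.zeros((m, m))],
                  [np.zeros((m, m)), K2]])  # (2m x 2m) augmented kernel
y_aug = np.concatenate([y, y])              # labels repeated per channel
clf = SVC(kernel="precomputed").fit(K_aug, y_aug)
print(clf.dual_coef_.shape)                 # (1, n_support), n_support <= n*m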
“…To deal with this issue, this paper makes use of the notion of empirical feature space [5,6], which preserves the geometrical structure of the original feature space, given that distances and angles in the feature space are uniquely determined by dot products and that the dot products of the corresponding images are the original kernel values. This empirical feature space is Euclidean, so it provides a tractable framework to study the spatial distribution of the mapping function Φ(·) [7], to measure class separability [6] and to optimize the kernel [6,8]. Besides, the notion of empirical kernel feature space has been used for the kernelization of all kinds of linear classifiers [9,10], with the advantage that the algorithm does not need to be formulated to deal with dot products.…”
Section: Introduction
confidence: 99%
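
A minimal sketch of the empirical feature space construction this statement refers to, assuming the standard eigendecomposition-based definition from [5,6]: rows of the embedding have pairwise dot products equal to the kernel values, so the distances and angles of the implicit feature space are preserved.

import numpy as np

def empirical_feature_map(K, tol=1e-10):
    # Embed the m training points of a PSD kernel matrix K into a Euclidean
    # space whose dot products reproduce K exactly.
    eigval, eigvec = np.linalg.eigh(K)   # K is symmetric PSD
    keep = eigval > tol                  # drop numerically null directions
    return eigvec[:, keep] * np.sqrt(eigval[keep])

rng = np.random.default_rng(2)
X = rng.normal(size=(5, 3))
K = (X @ X.T + 1.0) ** 2                 # a polynomial kernel, for example
Z = empirical_feature_map(K)
assert np.allclose(Z @ Z.T, K)           # dot products recover the kernel

Because the embedding is explicit and Euclidean, any linear classifier can be run on Z directly, which is the kernelization route the statement attributes to [9,10].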