2008
DOI: 10.1109/icassp.2008.4517926

Multiple kernel learning for speaker verification

Abstract: Many speaker verification (SV) systems combine multiple classifiers using score-fusion to improve system performance. For SVM classifiers, an alternative strategy is to combine at the kernel level. This involves finding a suitable kernel weighting, known as Multiple Kernel Learning (MKL). Recently, an efficient maximum-margin scheme for MKL has been proposed. This work examines several refinements to this scheme for SV. The standard scheme has a known tendency towards sparse weightings, which may not be optimal…
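As a rough illustration of the kernel-level combination the abstract describes (not the paper's actual scheme), the sketch below trains a single SVM on a weighted sum of base kernel matrices. The kernel choices and the weight vector `beta` are assumptions made purely for illustration; MKL would learn the weighting jointly with the classifier.

```python
# Minimal sketch: a fixed (assumed) weighting of two base kernels feeds one SVM,
# instead of fusing the scores of separately trained classifiers.
import numpy as np
from sklearn.svm import SVC

def linear_kernel(X, Z):
    return X @ Z.T

def rbf_kernel(X, Z, gamma=0.1):
    d2 = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-gamma * d2)

rng = np.random.default_rng(0)
X_tr, y_tr = rng.normal(size=(40, 6)), rng.integers(0, 2, size=40)
X_te = rng.normal(size=(10, 6))

beta = np.array([0.3, 0.7])            # hypothetical kernel weights; MKL would learn these
kernels = [linear_kernel, rbf_kernel]

K_tr = sum(b * k(X_tr, X_tr) for b, k in zip(beta, kernels))   # (n_tr, n_tr)
K_te = sum(b * k(X_te, X_tr) for b, k in zip(beta, kernels))   # (n_te, n_tr)

svm = SVC(kernel="precomputed").fit(K_tr, y_tr)
print(svm.predict(K_te))
```

With a precomputed composite kernel, any standard SVM solver can be reused; the MKL problem then reduces to choosing the weighting beta.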

Cited by 15 publications (8 citation statements)
References: 8 publications
“…2, middle panel). Longworth [18] combines various dynamic kernels, including derivative kernels and parametric kernels, using MKL for speaker verification. In recent image recognition research, however, the MKL approach has begun to be used for feature vector selection or weighting.…”
Section: Feature Selection and Classification Using MKL-SVM
confidence: 99%
“…where 0 < α(s_t) ≤ 1 denotes the learning step size and ∂û(s_t)/∂σ(s_t) can be calculated from (32). Under the framework of MKL, the MMLC algorithm is proposed as an improvement of RL in two aspects.…”
Section: The MMLC Algorithm
confidence: 99%
“…The existing MKL algorithms can be divided into five major groups [26]: fixed rules, heuristic approaches, optimization approaches, Bayesian approaches and boosting approaches. In recent literature, MKL has been widely applied in many fields, such as visual object recognition [30], visual search [31], speaker verification [32], dimensionality reduction [33], structured prediction [34] and brain-computer interfacing [35].…”
Section: Introduction
confidence: 99%
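Of the five MKL families listed in the statement above, the heuristic group is the easiest to illustrate: kernel weights can be set from simple scores such as kernel-target alignment rather than by solving an optimization problem. The sketch below uses an alignment-based rule as one common heuristic; it is not necessarily the rule surveyed in [26].

```python
# Heuristic MKL weighting: score each base kernel by its alignment with the
# label kernel y y^T, then normalize the scores into kernel weights.
import numpy as np

def alignment(K, y):
    Y = np.outer(y, y)                      # ideal (target) kernel
    return (K * Y).sum() / (np.linalg.norm(K) * np.linalg.norm(Y))

rng = np.random.default_rng(2)
X, y = rng.normal(size=(50, 8)), rng.choice([-1.0, 1.0], size=50)

K_lin = X @ X.T
K_rbf = np.exp(-0.1 * ((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=-1))

scores = np.array([alignment(K, y) for K in (K_lin, K_rbf)])
scores = np.clip(scores, 1e-12, None)       # keep the weights non-negative
beta = scores / scores.sum()                # heuristic kernel weights
K_combined = beta[0] * K_lin + beta[1] * K_rbf
print("alignment-based weights:", beta)
```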
“…Clearly, if the chosen kernel is a linear transformation of the concatenation x_{n-1} = {x^i_{n-1}, x^{i-1}_{n-1}}, then the solution is equivalent to using the common KT, which is a particular case of the proposed machine. The so-called summation kernel (SK) has been extensively used in the literature as a straightforward way of fusing heterogeneous information [9,8].…”
Section: Article In Press
confidence: 99%
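The summation kernel mentioned in the statement above is the simplest fusion rule: kernels computed on heterogeneous feature sets are simply added, which corresponds to concatenating their feature maps. A minimal sketch, where the two "views" and their interpretations are hypothetical:

```python
# Summation kernel: add kernels computed on two heterogeneous views of the data.
# The sum of two positive semidefinite Gram matrices is again a valid kernel.
import numpy as np

rng = np.random.default_rng(3)
view_a = rng.normal(size=(25, 10))   # e.g. spectral features (hypothetical)
view_b = rng.normal(size=(25, 4))    # e.g. contextual features (hypothetical)

def rbf(X, gamma=0.2):
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-gamma * d2)

K_sum = rbf(view_a) + rbf(view_b)    # summation kernel over the two views
print(K_sum.shape)                   # (25, 25)
```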
“…Differences with previous composite kernels: composite kernels have received attention in the kernel methods literature [6,15,21,8]. Note, however, that our emphasis is on the difference between applying the KT and developing specific signal-based kernels.…”
Section: Article In Press
confidence: 99%