2009
DOI: 10.1109/tasl.2009.2019920

Maximum Penalized Likelihood Kernel Regression for Fast Adaptation

Abstract: This paper proposes a nonlinear generalization of the popular maximum-likelihood linear regression (MLLR) adaptation algorithm using kernel methods. The proposed method, called maximum penalized likelihood kernel regression adaptation (MPLKR), applies kernel regression with appropriate regularization to determine the affine model transform in a kernel-induced high-dimensional feature space. Although this is not the first attempt of applying kernel methods to conventional linear adaptation algorithms, …
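The abstract describes the core mechanism as kernel regression with regularization, i.e. penalized least squares solved in a kernel-induced feature space. The following is a minimal sketch of that general idea (kernel ridge regression), not the authors' MPLKR implementation: the function names, the RBF kernel choice, and the parameter values are illustrative assumptions.

```python
import numpy as np

def rbf_kernel(X1, X2, gamma=1.0):
    # Gaussian (RBF) kernel matrix from pairwise squared Euclidean distances.
    d2 = (np.sum(X1**2, axis=1)[:, None]
          + np.sum(X2**2, axis=1)[None, :]
          - 2.0 * X1 @ X2.T)
    return np.exp(-gamma * d2)

def fit_kernel_ridge(X, Y, lam=1e-3, gamma=1.0):
    # Penalized least squares in the kernel-induced space:
    #   minimize ||Y - K a||^2 + lam * a^T K a
    # whose closed-form solution is a = (K + lam I)^{-1} Y.
    # The penalty term lam plays the regularization role the abstract mentions.
    K = rbf_kernel(X, X, gamma)
    return np.linalg.solve(K + lam * np.eye(len(X)), Y)

def predict(X_train, alpha, X_new, gamma=1.0):
    # Regression output for new inputs via kernel evaluations against training data.
    return rbf_kernel(X_new, X_train, gamma) @ alpha
```

In MPLKR the regression target is an affine model transform estimated under a penalized likelihood criterion rather than the plain squared-error fit above, but the regularized kernel solve is the same structural ingredient.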

Cited by 10 publications (14 citation statements)
References 24 publications (60 reference statements)
“…This requires the kernel model, employed by KGC, to be expressed in a maximum likelihood framework: it is a matter for further work. We refer the reader to Mak et al (2009) for a recent attempt to generalize in a nonlinear fashion the maximum-likelihood linear regression.…”
Section: Model Comparison
confidence: 99%
“…Padmanabhan et al. combined linear and non-linear mappings to achieve robustness against data insufficiency [68]. Mak et al. applied kernel methods to estimate an affine mapping in a space of higher dimension than that of the features [55]. In this method, regularization techniques were used to achieve robustness against data insufficiency.…”
Section: Non-linear Mapping
confidence: 99%
“…Many state-of-the-art adaptation methods can help compensate for speaker variability, channel variability and content variability. Generally speaking, model-based adaptation algorithms can be divided into three categories [1]: speaker-clustering-based methods, which include eigenspace-based methods; Bayesian methods, such as maximum a posteriori adaptation; and transform-based methods, such as maximum likelihood linear regression adaptation. These model-based methods need to change the speaker-independent HMM parameters, which can be computationally expensive and requires storing a significant amount of data for the adapted speaker-dependent models.…”
Section: Introduction
confidence: 99%