2013 IEEE Workshop on Automatic Speech Recognition and Understanding (ASRU 2013)
DOI: 10.1109/asru.2013.6707722
Improved cepstral mean and variance normalization using Bayesian framework

Cited by 33 publications (16 citation statements)
References 8 publications
“…Cepstral Mean and Variance Normalization (CMVN) is a simple and effective feature domain technique to deal with mismatch conditions [1]. Speaker adaptation techniques in model domain have been reported in [2], [3] to compensate speaker mismatch.…”
Section: Introduction (mentioning)
confidence: 99%
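The statement above refers to per-utterance CMVN, which normalizes each cepstral coefficient to zero mean and unit variance over the utterance. A minimal NumPy sketch (the function name and the variance floor are illustrative assumptions, not taken from the cited paper):

```python
import numpy as np

def cmvn(feats):
    """Per-utterance cepstral mean and variance normalization.

    feats: (T, D) array of cepstral features, T frames of dimension D.
    Returns features with zero mean and unit variance per coefficient.
    """
    mu = feats.mean(axis=0)
    sigma = feats.std(axis=0)
    # Floor the deviation to avoid division by zero on constant dimensions.
    return (feats - mu) / np.maximum(sigma, 1e-8)
```

Because the statistics are taken over the whole utterance, this variant assumes the channel and speaker effects are stationary within the utterance.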
“…To this aim, we propose archetypal analysis (AA) [7] based sparse convex sequence kernel (SCSK) for the BAD task. Further, to mitigate channel and environment variations, the extracted short-time features are preprocessed with cepstral mean and variance normalization (CMVN) [10], and short-time feature warping (Gaussianization) techniques [11], which are widely used in context of automatic speaker recognition.…”
Section: Introduction (mentioning)
confidence: 99%
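The second technique mentioned in this statement, short-time feature warping (Gaussianization), replaces each coefficient with the inverse standard-normal CDF of its rank inside a sliding window. A hedged sketch using only the standard library's `statistics.NormalDist` (the window length and the mid-rank convention are assumed typical choices, not from the cited work):

```python
import numpy as np
from statistics import NormalDist

def feature_warp(feats, win=301):
    """Short-time feature warping (Gaussianization).

    For each frame and coefficient, compute the rank of the value within a
    centered sliding window and map the mid-rank quantile through the
    inverse standard-normal CDF, so the local distribution becomes ~N(0, 1).
    """
    T, D = feats.shape
    half = win // 2
    nd = NormalDist()
    out = np.empty((T, D), dtype=float)
    for t in range(T):
        lo, hi = max(0, t - half), min(T, t + half + 1)
        window = feats[lo:hi]
        n = hi - lo
        for d in range(D):
            # Rank of the current value within the window, 1..n.
            rank = int(np.sum(window[:, d] < feats[t, d])) + 1
            # Mid-rank quantile keeps the argument strictly inside (0, 1).
            out[t, d] = nd.inv_cdf((rank - 0.5) / n)
    return out
```

Rank-based warping is robust to outliers, which is one reason it is popular in speaker recognition front ends alongside CMVN.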
“…The main aims of using normalization are to reduce the effects of noise, channel, and handset transducers and to alleviate linear and non-linear channel effects. In this study, feature warping (FW) and cepstral mean and variance normalization (CMVN) over a sliding window are used [39,40] to reduce the noise and handset effects and mitigate linear channel effects; this gives improvements and robustness to SIA [6]. The features and feature normalization are as employed in [29].…”
Section: Feature Extraction and Compensation (mentioning)
confidence: 99%
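Unlike the per-utterance variant, CMVN over a sliding window, as used in this statement, normalizes each frame by the mean and variance of its local context, which tracks slowly varying channel effects. A minimal sketch (the 301-frame window, roughly 3 s at a 10 ms frame shift, is an assumed typical value):

```python
import numpy as np

def sliding_cmvn(feats, win=301):
    """CMVN over a centered sliding window.

    Each frame is normalized by the mean and standard deviation of the
    frames inside its window; windows are truncated at utterance edges.
    """
    T, D = feats.shape
    half = win // 2
    out = np.empty((T, D), dtype=float)
    for t in range(T):
        lo, hi = max(0, t - half), min(T, t + half + 1)
        ctx = feats[lo:hi]
        out[t] = (feats[t] - ctx.mean(axis=0)) / np.maximum(ctx.std(axis=0), 1e-8)
    return out
```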