Interspeech 2006
DOI: 10.21437/interspeech.2006-183
Within-class covariance normalization for SVM-based speaker recognition

Cited by 249 publications (18 citation statements) · References 10 publications
“…For phrase-dependent TD-SV, it is preferable to train the embedding extractor on the matched phrases to make the speaker embeddings reflect the phonetic variability in the pre-defined phrases. Also, the parameters in the conventional channel compensation methods, such as within-class covariance normalization (WCCN) [31], LDA, and PLDA, should be trained on the data with the pre-defined phrases. This strategy helps reject impostors speaking the wrong phrases [32].…”
Section: B. Text-Dependent Speaker Verification (mentioning)
confidence: 99%
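The WCCN transform named in this statement can be estimated directly from labelled embeddings. A minimal numpy sketch (the function name is illustrative, not from the cited papers): average the per-class covariances, then take the Cholesky factor of the inverse, so that projected embeddings have identity within-class covariance.

```python
import numpy as np

def wccn_transform(X, labels):
    """Estimate a WCCN projection from embeddings X (n, d) and class labels.

    Returns B such that y = B.T @ x has identity within-class covariance,
    using the convention B @ B.T = W^{-1}.
    """
    d = X.shape[1]
    W = np.zeros((d, d))
    classes = np.unique(labels)
    for c in classes:
        Xc = X[labels == c]
        Xc = Xc - Xc.mean(axis=0)          # centre within the class
        W += Xc.T @ Xc / len(Xc)           # per-class covariance
    W /= len(classes)                      # average over classes
    # Cholesky factor of the inverse within-class covariance
    return np.linalg.cholesky(np.linalg.inv(W))
```

Projecting the training data with the returned factor (`X @ B`, i.e. rows `B.T @ x`) yields a within-class covariance equal to the identity, which is the normalization the cited statement refers to.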
“…We followed the standard procedure to pre-process the i-vectors for Gaussian PLDA modeling. Specifically, the 500-dimensional senone i-vectors were whitened by within-class covariance normalization (WCCN) [40] and length normalization [29], followed by linear discriminant analysis to reduce the dimension to 200 and variance normalization by WCCN [41]. These 200-dimensional i-vectors were input to the PLDA model and the DNNs.…”
Section: Denoised Senone I-Vectors (mentioning)
confidence: 99%
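The pre-processing chain quoted above (WCCN whitening, length normalization, LDA to a lower dimension, then a second WCCN) can be sketched end to end. This is an illustrative numpy implementation assuming plain scatter-matrix LDA; the function names and the toy dimensions in the test are not from the cited work.

```python
import numpy as np

def within_class_cov(X, labels):
    """Average per-class covariance of X (n, d)."""
    d = X.shape[1]
    W = np.zeros((d, d))
    classes = np.unique(labels)
    for c in classes:
        Xc = X[labels == c] - X[labels == c].mean(axis=0)
        W += Xc.T @ Xc / len(Xc)
    return W / len(classes)

def wccn(X, labels):
    """Cholesky factor B with B @ B.T = W^{-1}."""
    return np.linalg.cholesky(np.linalg.inv(within_class_cov(X, labels)))

def lda_projection(X, labels, n_components):
    """Leading eigenvectors of Sw^{-1} Sb (classic Fisher LDA)."""
    mean = X.mean(axis=0)
    d = X.shape[1]
    Sw = np.zeros((d, d))
    Sb = np.zeros((d, d))
    for c in np.unique(labels):
        Xc = X[labels == c]
        mc = Xc.mean(axis=0)
        Sw += (Xc - mc).T @ (Xc - mc)
        Sb += len(Xc) * np.outer(mc - mean, mc - mean)
    evals, evecs = np.linalg.eig(np.linalg.inv(Sw) @ Sb)
    order = np.argsort(-evals.real)
    return evecs[:, order[:n_components]].real

def preprocess_ivectors(X, labels, lda_dim):
    X = X @ wccn(X, labels)                            # 1. WCCN whitening
    X = X / np.linalg.norm(X, axis=1, keepdims=True)   # 2. length normalization
    X = X @ lda_projection(X, labels, lda_dim)         # 3. LDA dimension reduction
    return X @ wccn(X, labels)                         # 4. WCCN variance normalization
```

In the cited setup the input would be 500-dimensional i-vectors reduced to 200 by LDA; the sketch works for any dimensions as long as `lda_dim` is at most the number of classes minus one.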
“…where ω is the global mean of the i-vectors, W is a transformation matrix obtained from the Cholesky decomposition of the within-class covariance matrix of the i-vectors [18], and ω_wht is the whitened i-vector. The second step is to apply a simple length normalization to the whitened i-vectors:…”
Section: Pre-Processing for Gaussian PLDA (mentioning)
confidence: 99%
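The two steps described here, whitening with a Cholesky factor derived from the within-class covariance and then length normalization, amount to a few lines of numpy. A hedged sketch: the function and argument names are hypothetical, and the convention B @ B.T = Σ_wc^{-1} follows the usual WCCN formulation since the statement's own equation is truncated.

```python
import numpy as np

def whiten_and_length_norm(omega, mean, Sigma_wc):
    """Whiten an i-vector omega against the global mean, then length-normalize.

    Sigma_wc is the within-class covariance of the i-vectors; B satisfies
    B @ B.T = Sigma_wc^{-1}, so B.T @ (omega - mean) has identity
    within-class covariance.
    """
    B = np.linalg.cholesky(np.linalg.inv(Sigma_wc))
    w = B.T @ (omega - mean)          # step 1: whitening
    return w / np.linalg.norm(w)      # step 2: length normalization
```

After the second step every i-vector lies on the unit sphere, which is what makes the subsequent Gaussian PLDA assumptions reasonable.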
“…It is customary to include linear discriminant analysis (LDA) and within-class covariance normalization (WCCN) [18] in the pre-processing steps. The whole pre-processing can be written in a more succinct fashion:…”
Section: Pre-Processing for Gaussian PLDA (mentioning)
confidence: 99%