2016 Conference of the Oriental Chapter of International Committee for Coordination and Standardization of Speech Databases And 2016
DOI: 10.1109/icsda.2016.7919003
Local training for PLDA in speaker verification

Abstract: PLDA is a popular normalization approach for the i-vector model, and it has delivered state-of-the-art performance in speaker verification. However, PLDA training requires a large amount of labeled development data, which is highly expensive in most cases. A possible way to mitigate the problem is unsupervised adaptation, which uses unlabeled data to adapt the PLDA scattering matrices to the target domain. In this paper, we present a new 'local training' approach that utilizes inacc…
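The unsupervised adaptation the abstract refers to is commonly done by interpolating the out-of-domain PLDA scattering (within- and between-class covariance) matrices toward statistics estimated from unlabeled in-domain i-vectors. The sketch below illustrates that general idea only; the function and parameter names are hypothetical and this is not the paper's 'local training' method, whose details are truncated in the abstract.

```python
import numpy as np

def adapt_plda_covariances(Sw_ood, Sb_ood, unlabeled_ivectors, alpha=0.5):
    """Illustrative unsupervised PLDA adaptation (not the paper's method).

    Sw_ood, Sb_ood : out-of-domain within/between-class scattering matrices.
    unlabeled_ivectors : (N, D) array of unlabeled in-domain i-vectors.
    alpha : interpolation weight toward the in-domain statistics.
    """
    X = np.asarray(unlabeled_ivectors, dtype=float)
    # Total covariance of the unlabeled target-domain data (labels unknown,
    # so only the total scatter is observable).
    C_target = np.cov(X, rowvar=False)
    # Split the in-domain total covariance in the same within/between
    # proportions as the out-of-domain model, then interpolate.
    C_ood = Sw_ood + Sb_ood
    ratio_w = np.trace(Sw_ood) / np.trace(C_ood)
    Sw_new = (1 - alpha) * Sw_ood + alpha * ratio_w * C_target
    Sb_new = (1 - alpha) * Sb_ood + alpha * (1 - ratio_w) * C_target
    return Sw_new, Sb_new
```

With alpha = 0 the out-of-domain model is kept unchanged; with alpha = 1 the scattering matrices are rebuilt entirely from the unlabeled target-domain total covariance.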

Cited by 3 publications (1 citation statement). References 11 publications (9 reference statements).
“…In our initial experiment, we compared the performance of our proposed classification technique of GMM-UBM with other modeling techniques. Specifically, the other techniques are the k-NN [29], [78], GMM [21]- [23], [26], [30], and GMM-iVector [79]. The goal is to compare the performance of these different identification methods using the similar datasets (as listed in Table III) and front-end processing, as discussed earlier.…”
Section: Comparison To Other Models (mentioning)
confidence: 99%