Interspeech 2017
DOI: 10.21437/interspeech.2017-803
Analysis of Score Normalization in Multilingual Speaker Recognition

Abstract: NIST Speaker Recognition Evaluation 2016 has revealed the importance of score normalization for mismatched data conditions. This paper analyzes several score normalization techniques for test conditions with multiple languages. The best performing one for a PLDA classifier is an adaptive s-norm with 30% relative improvement over the system without any score normalization. The analysis shows that the adaptive score normalization (using top scoring files per trial) selects cohorts that in 68% contain recordings …

Cited by 88 publications (50 citation statements) · References 19 publications
“…In fact all of the BUT subsystems presented here contain score normalization. We have dedicated the whole paper [19] to study the effects of score normalization. We also encourage the interested reader to study the theory behind score normalization that was partly inspired by SRE'16 in [20].…”
Section: Results
confidence: 99%
“…SRE'16 brought a completely new non-English dataset and a tough challenge in domain adaptation. It revealed the weak side of current BNFs, which are tuned for English, brought back the issue of score normalization [19,20], and in general significantly increased the difficulty, which will undoubtedly inspire a lot of research.…”
Section: Introduction
confidence: 99%
“…There is a consistent improvement over the systems without s-norm (15-17). Fusion of these three systems (18-20) forms our primary submission (system 23) to the fixed condition. We have also run a post-evaluation fusion with the same systems without s-norm (15-17), which is shown in row 24.…”
Section: Results and Analysis
confidence: 78%
“…We used adaptive symmetric score normalization (adaptive S-norm), which computes an average of normalized scores from Z-norm and T-norm [14,20]. In the adaptive version [20,21,22], only part of the cohort is selected to compute the mean and variance for normalization. Usually the X top-scoring or most similar files are selected, and we set X to 400 for all experiments.…”
Section: Score Normalization
confidence: 99%
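The adaptive S-norm described in the statement above can be sketched as follows. This is a minimal illustration under stated assumptions, not the authors' implementation: the function name and interface are hypothetical, but the top-X cohort selection and the average of the Z-norm-like and T-norm-like terms follow the description.

```python
import numpy as np

def adaptive_s_norm(raw_score, enroll_cohort_scores, test_cohort_scores, top_x=400):
    """Adaptive symmetric score normalization (adaptive S-norm), sketched.

    raw_score: trial score s(e, t) between enrollment model e and test segment t
    enroll_cohort_scores: scores of e against all cohort files
    test_cohort_scores: scores of t against all cohort files
    top_x: number of top-scoring cohort files used for the statistics
           (400 in the cited experiments)
    """
    # Select the top-X highest-scoring cohort files for each side of the trial.
    top_e = np.sort(np.asarray(enroll_cohort_scores))[-top_x:]
    top_t = np.sort(np.asarray(test_cohort_scores))[-top_x:]
    # Z-norm-like term (enrollment side) and T-norm-like term (test side).
    z = (raw_score - top_e.mean()) / top_e.std()
    t = (raw_score - top_t.mean()) / top_t.std()
    # Adaptive S-norm is the average of the two normalized scores.
    return 0.5 * (z + t)
```

In practice the cohort scores are precomputed once per enrollment model and per test segment, so the per-trial cost is only the top-X selection and two mean/variance estimates.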
“…This approach can be further combined with domain adaptation (Glembek et al., 2014), which requires a certain amount of usually unsupervised target data. In the very last stage of the system, SV outputs can be adjusted on a per-trial basis via various kinds of adaptive score normalization (Sturim and Reynolds, 2005; Matějka et al., 2017; Swart and Brümmer, 2017).…”
Section: Introduction
confidence: 99%