2018
DOI: 10.1109/taslp.2018.2793670

Generalizing I-Vector Estimation for Rapid Speaker Recognition


Cited by 17 publications (6 citation statements)
References 27 publications
“…The framework of an i-vector-based speaker recognition system is shown in Figure 1. The main problems to be addressed are estimating the total variability space T, extracting the i-vector, channel compensation, and cosine distance scoring [15].…”
Section: The Principle of I-Vector
confidence: 99%
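The scoring step this excerpt mentions is typically a plain cosine similarity between an enrollment and a test i-vector. A minimal sketch in Python/NumPy follows; the 400-dimensional vectors are illustrative assumptions, not values from the paper:

```python
import numpy as np

def cosine_score(w_enroll: np.ndarray, w_test: np.ndarray) -> float:
    """Cosine distance score between an enrollment and a test i-vector."""
    return float(w_enroll @ w_test /
                 (np.linalg.norm(w_enroll) * np.linalg.norm(w_test)))

# Illustrative 400-dimensional i-vectors (the dimensionality is an assumption)
rng = np.random.default_rng(0)
w_enroll, w_test = rng.standard_normal(400), rng.standard_normal(400)
print(cosine_score(w_enroll, w_test))
```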
“…This leads to a decline in system performance. To address this problem, we used WCCN [23] to suppress noise and channel distortion in the kernel function space. The covariance matrix of a single speaker reflects only the influence of noise.…”
Section: SVM Kernel Based on Bhattacharyya Distance Clustering and WCCN
confidence: 99%
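For context, WCCN estimates the average within-class (per-speaker) covariance of the vectors and whitens it. Below is a minimal sketch of the standard i-vector-space formulation; the cited paper's kernel-function-space variant is not reproduced here, and all variable names are my own:

```python
import numpy as np

def wccn_projection(ivectors: np.ndarray, labels: np.ndarray) -> np.ndarray:
    """Within-Class Covariance Normalization (WCCN).

    ivectors: (N, D) row vectors; labels: (N,) speaker identities.
    Returns B with W^-1 = B B^T, so the projected vectors (ivectors @ B)
    have identity within-class covariance. Assumes W is nonsingular,
    i.e. enough vectors per speaker.
    """
    dim = ivectors.shape[1]
    W = np.zeros((dim, dim))
    speakers = np.unique(labels)
    for spk in speakers:
        x = ivectors[labels == spk]
        xc = x - x.mean(axis=0)        # center within the speaker class
        W += xc.T @ xc / len(x)        # per-speaker covariance
    W /= len(speakers)                 # average over speakers
    return np.linalg.cholesky(np.linalg.inv(W))

# Usage: project before cosine scoring, e.g. ivectors @ wccn_projection(ivectors, labels)
```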
“…A speaker-dependent supervector is formed by using the acoustic features from all of the speaker's training utterances. Note that with SPPCA, h should be used instead of s in (9) and that Q's dimensionality is the same as V's.…”
Section: I-Vector Extraction
confidence: 99%
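In the common GMM formulation, the supervector this excerpt refers to is simply the concatenation of the per-component mean vectors. Equation (9) and the symbols h, s, Q, and V are not reproduced in the excerpt, so the sketch below shows only the generic stacking step:

```python
import numpy as np

def mean_supervector(component_means: np.ndarray) -> np.ndarray:
    """Stack the (C, F) matrix of GMM component means into a single
    CF-dimensional speaker supervector (generic construction; the
    specifics of the excerpt's eq. (9) are not reproduced here)."""
    return component_means.reshape(-1)

# e.g. a 512-component GMM over 39-dimensional features (sizes are assumptions)
means = np.zeros((512, 39))
print(mean_supervector(means).shape)  # (19968,)
```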
“…Previous studies on rapid i-vector extraction have primarily optimized the computations in the standard front-end factor analysis (FEFA) approach [1,2] by adopting new, often approximative, computational algorithms [6,8,9]. In this study, however, we focus on an alternative and straightforward compression of classic maximum a posteriori (MAP) [3] adapted GMM supervectors, with the goal of obtaining fast execution times without compromising ASV accuracy.…”
Section: Introduction
confidence: 99%
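To make the MAP supervector route concrete, here is a minimal sketch of means-only relevance-MAP adaptation against a diagonal-covariance UBM; the relevance factor r=16 is a conventional assumed value, and the function and variable names are illustrative, not from the paper:

```python
import numpy as np

def map_adapt_means(ubm_means, ubm_covs, ubm_weights, feats, r=16.0):
    """Means-only relevance-MAP adaptation of a diagonal-covariance UBM.

    feats: (T, F) feature frames; ubm_means: (C, F); ubm_covs: (C, F)
    diagonal covariances; ubm_weights: (C,). Returns adapted (C, F) means;
    stacking them yields the MAP supervector the excerpt refers to.
    """
    C, _ = ubm_means.shape
    ll = np.empty((feats.shape[0], C))
    for c in range(C):                 # per-component Gaussian log-likelihoods
        diff = feats - ubm_means[c]
        ll[:, c] = (np.log(ubm_weights[c])
                    - 0.5 * np.sum(np.log(2.0 * np.pi * ubm_covs[c]))
                    - 0.5 * np.sum(diff ** 2 / ubm_covs[c], axis=1))
    post = np.exp(ll - ll.max(axis=1, keepdims=True))
    post /= post.sum(axis=1, keepdims=True)       # frame posteriors
    n = post.sum(axis=0)                          # zeroth-order statistics
    fx = post.T @ feats                           # first-order statistics
    xbar = fx / np.maximum(n, 1e-10)[:, None]     # posterior-weighted means
    alpha = (n / (n + r))[:, None]                # data-dependent mixing weight
    return alpha * xbar + (1.0 - alpha) * ubm_means
```

Components with little data (small n) stay close to the UBM means, while well-observed components move toward the speaker's statistics, which is what makes the resulting supervector speaker-dependent yet robust for short enrollments.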