2016 4th International Conference on Control Engineering & Information Technology (CEIT)
DOI: 10.1109/ceit.2016.7929127
Emotional speaker recognition based on i-vector space model

Cited by 3 publications (1 citation statement) · References 7 publications
“…The first step was the preprocessing of the audio signal, corrupted by noise and room reverberation, with a binary time-frequency (T-F) masking algorithm based on a computational auditory scene analysis (CASA) approach and implemented via a deep neural network classifier. Mansour et al. [17] employed the i-vector approach together with a Support Vector Machine (SVM) classifier in an attempt to improve the degraded performance of speaker recognition in emotional auditory environments. Their results showed that the i-vector algorithm avoids the training complexity that the SVM model suffers from and yields promising gains in speaker recognition performance in an emotional context.…”
Section: Introduction and Literature Review
confidence: 99%
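The i-vector approach cited above maps a variable-length utterance to a fixed-length vector and scores it against enrolled speaker models, commonly by cosine similarity. The following is a minimal, hypothetical numpy sketch of that scoring idea only: mean-pooling over random "acoustic feature" frames stands in for a true i-vector extractor, and the speaker names and dimensions are invented for illustration.

```python
import numpy as np

def embed(frames):
    # Collapse a variable-length frame sequence (n_frames, n_feats)
    # into one fixed-length utterance vector -- a crude stand-in for
    # the fixed-length representation an i-vector extractor produces.
    return frames.mean(axis=0)

def cosine_score(u, v):
    # Cosine similarity, a standard scoring backend for i-vector systems.
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def identify(test_frames, enrolled):
    # enrolled: dict mapping speaker name -> enrollment embedding.
    # Return the enrolled speaker whose embedding best matches the test utterance.
    test_vec = embed(test_frames)
    return max(enrolled, key=lambda s: cosine_score(test_vec, enrolled[s]))

rng = np.random.default_rng(0)
# Hypothetical 20-dim feature frames for two well-separated speakers.
spk_a = rng.normal(loc=0.0, size=(100, 20))
spk_b = rng.normal(loc=2.0, size=(100, 20))
enrolled = {"A": embed(spk_a), "B": embed(spk_b)}

test = rng.normal(loc=2.0, size=(50, 20))  # new utterance drawn like speaker B
print(identify(test, enrolled))  # the test utterance should match speaker B
```

In a real system the embedding step would be a trained i-vector extractor (total-variability model over a UBM), and the cited work pairs such embeddings with an SVM back-end; the cosine scorer here merely illustrates the fixed-length comparison that makes that pairing tractable.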