2019
DOI: 10.1109/taffc.2017.2730187
Emotion Classification Using Segmentation of Vowel-Like and Non-Vowel-Like Regions

Cited by 35 publications (9 citation statements)
References 44 publications
“…In [8], the authors proposed a new speech feature combined with an SVM classifier and evaluated it using the EMODB and CASIA databases. In [39], the authors proposed feature extraction in both vowel and non-vowel regions with an extreme learning machine (ELM), which they evaluated with the EMODB and IEMOCAP databases. In [40], the authors proposed a new speech feature combined with an acoustic mask and a likelihood classifier, which they evaluated using the EMODB database.…”
Section: Confusion Matrix in Three Databases
confidence: 99%
“…Two recent works using the audio modality can be found in [53] and [54]. Deb and Dandapat (2017) proposed a method for speech emotion classification using vowel-like regions (VLRs) and non-vowel-like regions (non-VLRs).…”
Section: Audio Modalities
confidence: 99%
“…The MFCC is a widely used spectral feature for speech emotion recognition, 26 comprising the static MFCCs together with their delta and delta-delta coefficients, for a total of 39 coefficients per frame.…”
Section: Experiments and Evaluation
confidence: 99%
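The 39-coefficient feature described above is conventionally built by stacking 13 static MFCCs with their first- and second-order time derivatives. A minimal sketch of that stacking, using random placeholder data in place of real MFCCs and a standard HTK-style regression formula for the deltas (the exact extraction pipeline in the cited work is not specified here):

```python
import numpy as np

def delta(feat, N=2):
    """First-order regression deltas along the time axis.
    feat: array of shape (n_coeffs, n_frames); HTK-style formula
    with edge padding at the boundaries."""
    padded = np.pad(feat, ((0, 0), (N, N)), mode="edge")
    denom = 2 * sum(n * n for n in range(1, N + 1))
    out = np.zeros_like(feat, dtype=float)
    for t in range(feat.shape[1]):
        acc = np.zeros(feat.shape[0])
        for n in range(1, N + 1):
            # Weighted difference of frames n steps ahead and behind.
            acc += n * (padded[:, t + N + n] - padded[:, t + N - n])
        out[:, t] = acc / denom
    return out

# Hypothetical 13 static MFCCs over 100 frames (random stand-in data).
mfcc = np.random.randn(13, 100)
d1 = delta(mfcc)                      # delta MFCC
d2 = delta(d1)                        # delta-delta MFCC
features = np.vstack([mfcc, d1, d2])  # 13 + 13 + 13 = 39 coefficients
print(features.shape)                 # (39, 100)
```

The delta of a constant signal is zero by construction, which is a quick sanity check on the regression formula.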