Interspeech 2019
DOI: 10.21437/interspeech.2019-2034

Design and Development of a Multi-Lingual Speech Corpora (TaMaR-EmoDB) for Emotion Analysis

Abstract: This paper presents the design and development of a new multilingual emotional speech corpus, TaMaR-EmoDB (Tamil Malayalam Ravula-Emotion DataBase), and its evaluation using a deep neural network (DNN) baseline system. The corpus consists of utterances from three languages: Malayalam, Tamil, and Ravula, a tribal language. The database contains short speech utterances in four emotions (anger, anxiety, happiness, and sadness) along with neutral utterances. A subset of the corpus is first evaluated usi…
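The abstract describes evaluating the corpus with a DNN baseline over four emotions plus neutral. As an illustration only, a minimal feed-forward classifier over utterance-level features might look like the sketch below; the layer sizes, dropout rate, and 40-dimensional input are assumptions, not the configuration reported in the paper.

```python
# Hypothetical sketch of a small feed-forward DNN baseline for five-class
# speech emotion recognition (anger, anxiety, happiness, sadness, neutral).
# Architecture details are illustrative assumptions, not the paper's system.
import torch
import torch.nn as nn

EMOTIONS = ["anger", "anxiety", "happiness", "sadness", "neutral"]

class EmotionDNN(nn.Module):
    def __init__(self, input_dim: int = 40, hidden_dim: int = 256,
                 num_classes: int = len(EMOTIONS)):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(input_dim, hidden_dim),
            nn.ReLU(),
            nn.Dropout(0.3),
            nn.Linear(hidden_dim, hidden_dim),
            nn.ReLU(),
            nn.Dropout(0.3),
            nn.Linear(hidden_dim, num_classes),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, input_dim) utterance-level feature vectors
        return self.net(x)

if __name__ == "__main__":
    model = EmotionDNN()
    dummy_batch = torch.randn(8, 40)        # 8 utterances, 40-dim features
    logits = model(dummy_batch)             # (8, 5) class scores
    predictions = logits.argmax(dim=1)      # predicted emotion indices
    print([EMOTIONS[i] for i in predictions.tolist()])
```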

Cited by 2 publications (5 citation statements)
References 10 publications

“…From Figure 2 and Table 4 we can see that experts recognize all emotions noticeably above chance (0.25 for 4-class classification). This is consistent with the results of perception tests for adult emotion speech. For example, Rajan et al. [13] reported comparable…”
Section: Results of the Subjective Evaluation of Emotional Speech Rec… (mentioning)
confidence: 77%
“…For example, Sowmya and Rajeswari [75] reported an overall accuracy of 0.85 for automatic children's speech emotion recognition in the Tamil language with an SVM classifier on prosodic (energy) and spectral (MFCC) features. Rajan et al. [13] reported an average recall of 0.61 and an average precision of 0.60 in the Tamil language using a DNN framework, also on prosodic and spectral features.…”
Section: Results of Automatic Emotion Recognition on Extended Feature… (mentioning)
confidence: 99%
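
The statement above mentions prosodic (energy) and spectral (MFCC) features. A minimal sketch of pooling such features into one vector per utterance, assuming librosa and a 16 kHz sampling rate; the file path and feature dimensions are hypothetical and not taken from either paper.

```python
# Sketch: utterance-level spectral (MFCC) and prosodic (energy) features,
# pooled by mean and standard deviation over frames. Assumed setup only.
import numpy as np
import librosa

def utterance_features(wav_path: str, n_mfcc: int = 13) -> np.ndarray:
    signal, sr = librosa.load(wav_path, sr=16000)
    mfcc = librosa.feature.mfcc(y=signal, sr=sr, n_mfcc=n_mfcc)  # (n_mfcc, frames)
    energy = librosa.feature.rms(y=signal)                       # (1, frames)
    frame_feats = np.vstack([mfcc, energy])                      # (n_mfcc + 1, frames)
    # Mean and standard deviation over frames give a fixed-length vector.
    return np.concatenate([frame_feats.mean(axis=1), frame_feats.std(axis=1)])

# Example usage (path is hypothetical):
# feats = utterance_features("tamar_emodb/anger_0001.wav")  # shape: (28,)
```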