2012
DOI: 10.1007/s10772-012-9142-8

Emotion modeling from speech signal based on wavelet packet transform

Cited by 16 publications (9 citation statements). References 1 publication.
“…3. At the beginning, the speech signals of all speakers were normalized with respect to the square root of energy, according to formula (9). Then, for each normalized recording, the DFWT was computed, as described in Section II. …”

Section: Methods

confidence: 99%
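Formula (9) itself is not reproduced in this excerpt; a common reading of "normalized in reference to the square root of energy" is dividing each recording by the square root of its total energy, so every signal has unit energy before the wavelet transform. A minimal sketch under that assumption:

```python
import numpy as np

def normalize_by_energy(signal: np.ndarray) -> np.ndarray:
    """Scale a speech signal by the square root of its energy.

    Energy is taken as the sum of squared samples; dividing by its
    square root gives every recording unit energy, so recordings of
    different loudness become comparable before the transform.
    """
    signal = signal.astype(np.float64)
    energy = np.sum(signal ** 2)
    if energy == 0.0:
        return signal  # silent recording: nothing to scale
    return signal / np.sqrt(energy)

# After normalization, the energy of any non-silent recording is 1.
x = np.array([0.5, -0.25, 0.75, 0.1])
y = normalize_by_energy(x)
```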
“…In another approach [8], experiments on the applicability of the DWT, the Wavelet Packet Transform (WPT), and the Perceptual Wavelet Packet Transform (PWPT) were performed, with the best results achieved by the DWT method. Emotion modelling and emotion conversion using transitions in the subbands of the WPT are performed in [9]. The fusion of the Fourier transform and the DWT has been investigated for emotion detection, and some results are presented in this paper.…”

Section: Introduction

confidence: 99%
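The structural difference between the DWT and the WPT mentioned above is that the DWT recursively splits only the approximation (low-pass) band, while the WPT splits every band at every level. A minimal Haar-based sketch (not the filters used in [8] or [9], which are not specified here) makes the contrast concrete:

```python
import numpy as np

def haar_split(x: np.ndarray):
    """One Haar analysis step: split a signal into a low-pass
    (approximation) and high-pass (detail) half-band, each
    downsampled by two.  The 1/sqrt(2) factors keep it orthonormal."""
    even, odd = x[0::2], x[1::2]
    approx = (even + odd) / np.sqrt(2.0)
    detail = (even - odd) / np.sqrt(2.0)
    return approx, detail

def dwt(x: np.ndarray, levels: int):
    """Plain DWT: only the approximation band is split further,
    yielding 1 approximation + `levels` detail bands."""
    details = []
    for _ in range(levels):
        x, d = haar_split(x)
        details.append(d)
    return [x] + details

def wpt(x: np.ndarray, levels: int):
    """Wavelet packet transform: every band (approximation *and*
    detail) is split at each level, yielding 2**levels subbands."""
    bands = [x]
    for _ in range(levels):
        nxt = []
        for b in bands:
            nxt.extend(haar_split(b))
        bands = nxt
    return bands
```

For a length-8 signal and 3 levels, `dwt` returns 4 bands while `wpt` returns 8 equal-width subbands; it is this finer, uniform tiling of the frequency axis that the WPT-based emotion features exploit.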
“…The final model fitted to calculate the mean pitch for Marathi emotional speech is given by equation (7):

Mean pitch = Emotion + Gender + (1 | speaker) (7)

As in equation (7), emotion and gender are fixed-effect variables, and speaker is a random-effect variable in the mean-pitch model for Marathi emotional speech. Table VI shows the effects of the fixed-effect variables, emotion (angry, happiness, fear, and neutral) and gender (male and female), on the computed mean pitch values.…”

Section: Modeling Mean Pitch

confidence: 99%
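Equation (7) uses the usual mixed-effects notation: emotion and gender contribute fixed effects, and `(1 | speaker)` a per-speaker random intercept. A minimal sketch of that structure (not the authors' fitting procedure, which would use a mixed-model package; the data values here are invented) estimates the fixed part as cell means and the speaker intercepts from the residuals:

```python
from collections import defaultdict
from statistics import mean

# Toy observations: (emotion, gender, speaker, mean pitch in Hz) -- invented.
data = [
    ("angry",   "F", "s1", 240.0), ("angry",   "F", "s2", 250.0),
    ("neutral", "F", "s1", 200.0), ("neutral", "F", "s2", 210.0),
    ("angry",   "M", "s3", 180.0), ("neutral", "M", "s3", 150.0),
]

# Fixed part: mean pitch per (emotion, gender) cell.
cells = defaultdict(list)
for emo, gen, spk, pitch in data:
    cells[(emo, gen)].append(pitch)
fixed = {cell: mean(vals) for cell, vals in cells.items()}

# Random part: per-speaker intercept = mean residual after the fixed part.
resid = defaultdict(list)
for emo, gen, spk, pitch in data:
    resid[spk].append(pitch - fixed[(emo, gen)])
speaker_intercept = {spk: mean(r) for spk, r in resid.items()}

def predict(emo: str, gen: str, spk: str) -> float:
    """Mean pitch = fixed(emotion, gender) + random intercept(speaker)."""
    return fixed[(emo, gen)] + speaker_intercept.get(spk, 0.0)
```

A proper fit would shrink the speaker intercepts toward zero (e.g. `statsmodels`' `mixedlm` with `groups=speaker`); the sketch only shows how the fixed and random terms of (7) combine.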
“…Acoustically, the boundaries showed preboundary lengthening and a pitch-contour slope on the final syllable, and the prominence correlated with maximum F0, maximum intensity, and a shorter duration. In [7], the authors analyzed MFCC features and energy ratios to investigate Marathi emotion recognition for anger and happiness. They observed that the recognition rate for anger was higher than for the happiness and neutral emotions.…”

Section: Introduction

confidence: 99%
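The "energy ratios" mentioned above are typically the fraction of a signal's spectral energy falling in each frequency subband. The exact band edges used in [7] are not given in this excerpt; a sketch with equal-width bands over the magnitude spectrum illustrates the idea:

```python
import numpy as np

def subband_energy_ratios(signal: np.ndarray, n_bands: int = 4) -> np.ndarray:
    """Energy ratio per frequency subband.

    The power spectrum (DC up to Nyquist) is cut into `n_bands`
    equal-width bands; each band's energy is divided by the total,
    so the ratios sum to 1.  Equal-width bands are an illustrative
    choice, not necessarily the partition used in [7].
    """
    spectrum = np.abs(np.fft.rfft(signal)) ** 2
    bands = np.array_split(spectrum, n_bands)
    energies = np.array([b.sum() for b in bands])
    total = energies.sum()
    return energies / total if total > 0 else energies

# A pure low-frequency tone concentrates its energy in the lowest band.
fs = 8000
t = np.arange(fs) / fs
tone = np.sin(2 * np.pi * 200 * t)   # 200 Hz tone, Nyquist = 4000 Hz
ratios = subband_energy_ratios(tone, n_bands=4)
```

Because emotions such as anger shift energy toward higher bands relative to neutral speech, these ratios form a compact feature vector alongside the MFCCs.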
“…for emotion recognition in human interactions. There are many methods for SER, such as neural networks [7], the wavelet packet transform [8], and support vector machines [9]. Existing SER techniques have several issues: a large number of features increases the computational complexity [10], and they suffer from noisy data, language dependence [11], the presence of irrelevant data, and the curse of dimensionality [12].…”

Section: Introduction

confidence: 99%