2016
DOI: 10.1587/transinf.2015edl8258

Spectral Features Based on Local Hu Moments of Gabor Spectrograms for Speech Emotion Recognition

Abstract: SUMMARY: To improve the speech emotion recognition rate, new spectral features based on local Hu moments of Gabor spectrograms, denoted GSLHu-PCA, are proposed. Firstly, the logarithmic energy spectrum of the emotional speech is computed. Secondly, Gabor spectrograms are obtained by convolving the logarithmic energy spectrum with Gabor wavelets. Thirdly, Gabor local Hu moment (GLHu) spectrograms are obtained through a block Hu strategy, and the discrete cosine transform (DCT) is then used to eliminate correlation…
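The abstract only sketches the GSLHu-PCA pipeline, so the following is a minimal, illustrative Python reconstruction of its four steps. The concrete choices here (STFT window and hop lengths, Gabor kernel parameters and orientations, a 4×4 block grid for the Hu moments, and the number of PCA components) are assumptions not given in the excerpt, and this is not the authors' implementation.

```python
# Illustrative sketch of a GSLHu-PCA-style feature pipeline (parameter values are assumptions).
import numpy as np
import cv2                                  # Gabor kernels, image moments, Hu moments
from scipy.signal import stft
from scipy.fft import dct
from sklearn.decomposition import PCA


def log_energy_spectrogram(x, fs, win=0.025, hop=0.010):
    """Step 1: logarithmic energy spectrum of the emotional speech."""
    _, _, Z = stft(x, fs, nperseg=int(win * fs), noverlap=int((win - hop) * fs))
    return np.log(np.abs(Z) ** 2 + 1e-10)


def gabor_spectrograms(S, thetas=(0, np.pi / 4, np.pi / 2, 3 * np.pi / 4), ksize=9):
    """Step 2: convolve the log-energy spectrogram with 2-D Gabor kernels
    (the number of orientations and the kernel parameters are assumed here)."""
    S = S.astype(np.float32)
    kernels = [cv2.getGaborKernel((ksize, ksize), sigma=2.0, theta=t,
                                  lambd=4.0, gamma=0.5) for t in thetas]
    return [cv2.filter2D(S, -1, k) for k in kernels]


def block_hu_features(G, grid=(4, 4)):
    """Step 3 (one plausible reading of the 'block Hu strategy'): split a Gabor
    spectrogram into a fixed grid of blocks and take the 7 Hu moments of each."""
    feats = []
    for row in np.array_split(G, grid[0], axis=0):
        for blk in np.array_split(row, grid[1], axis=1):
            m = cv2.moments(np.ascontiguousarray(blk, dtype=np.float32))
            feats.append(cv2.HuMoments(m).ravel())
    return np.concatenate(feats)


def gslhu_pca(signals, fs, n_components=40):
    """Steps 3-4: DCT to decorrelate the block Hu moments, then PCA."""
    X = []
    for x in signals:
        S = log_energy_spectrogram(np.asarray(x, dtype=float), fs)
        hu = np.concatenate([block_hu_features(G) for G in gabor_spectrograms(S)])
        X.append(dct(hu, norm='ortho'))
    X = np.vstack(X)
    k = min(n_components, X.shape[0], X.shape[1])
    return PCA(n_components=k).fit_transform(X)
```

A fixed 4×4 block grid is used so that utterances of different durations yield feature vectors of the same length; the paper's actual block-partitioning scheme may differ.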

Cited by 7 publications (8 citation statements)
References 12 publications
“…Effects of the experiment are conspicuous. On the ABC dataset, our method also outperforms [33], [37], [38], [40] in terms of WA. We report a UA of 57.59% on the ABC dataset, which outperforms all four compared works, i.e., 56.1% by [37], 55.5% by [33], 56.11% by [38], and 52.26% by [40].…”
Section: Table 3 T-test On Test Results
confidence: 77%
“…On the ABC dataset, our method also outperforms [33], [37], [38], [40] in terms of WA. We report a UA of 57.59% on the ABC dataset, which outperforms all four compared works, i.e., 56.1% by [37], 55.5% by [33], 56.11% by [38], and 52.26% by [40]. On the EMO-DB dataset, our method also clearly outperforms all four compared works.…”
Section: Table 3 T-test On Test Results
confidence: 77%
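The citation statements above and below compare methods by WA/WAR (weighted accuracy) and UA/UAR (unweighted accuracy, i.e., recall averaged over emotion classes). A small sketch with hypothetical toy labels (not data from the cited experiments) shows why the two metrics diverge on imbalanced test sets:

```python
import numpy as np


def weighted_accuracy(y_true, y_pred):
    """WA / WAR: overall fraction of correctly classified utterances."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    return float(np.mean(y_true == y_pred))


def unweighted_accuracy(y_true, y_pred):
    """UA / UAR: recall averaged over classes, so rare emotions weigh equally."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    recalls = [np.mean(y_pred[y_true == c] == c) for c in np.unique(y_true)]
    return float(np.mean(recalls))


# Hypothetical labels: the minority class "fear" is always misclassified,
# so UA drops well below WA.
y_true = ["anger"] * 8 + ["fear"] * 2
y_pred = ["anger"] * 10
print(weighted_accuracy(y_true, y_pred))    # 0.8
print(unweighted_accuracy(y_true, y_pred))  # 0.5
```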
“…[Tables] 1 to 3 summarize the improvement in performance of HPCB, in terms of UAR, over the related peer methods on the CASIA, EMODB, and SAVEE databases. Among them, [7], [29], [30], [32] used the results of previous researchers as the baseline, while the method of [21] was originally proposed for automatic speech recognition. When the researchers in [34]–[37] applied it to speech emotion recognition, the database used was also inconsistent with the database used in this study.…”
Section: The Performance of HPCB and Its Peer Methods
confidence: 99%
“…Tables 1-3 summarize the performance improvements of HPCB in terms of UAR with respect to the related peer methods on the CASIA, EMODB, and SAVEE databases. Among them, the authors of [9], [41]–[44] used the results of previous researchers as the baseline, while the method of [45] was originally proposed for automatic speech recognition. When the researchers in [46]–[49] applied it to speech emotion recognition, the database used was also inconsistent with the database used in this study.…”
Section: The Performance of HPCB and Its Peer Methods
confidence: 99%
“…
Model          WAR     UAR
GA-BEL [41]    38.55   38.55
HuWSF [42]     43.50   43.50
RDBN [44]      48.50   48.50
PCRN [9]       58.25   58.25
Bi-LSTM [46]   /       75.00
Bi-GRU [47]    /       72.50
CNN [48]       /       76.67
CLDNN [45]     /       61.67
CapsNet [49]   /       63.33
HPCB (Ours)    /       79.67
…”
Section: Model WAR UAR
confidence: 99%