2015
DOI: 10.1016/j.procs.2015.02.112
Hybrid Approach for Emotion Classification of Audio Conversation Based on Text and Speech Mining

Cited by 75 publications (33 citation statements)
References 6 publications
“…Luengo et al [20] used 324 spectral and 54 prosody features combined with five voice quality features to test their proposed speech emotion recognition method on the Surrey audio-visual expressed emotion (SAVEE) database after applying the minimal redundancy maximal relevance (mRMR) to reduce less discriminating features. In [28], a method for recognizing emotions in an audio conversation based on speech and text was proposed and tested on the SemEval-2007 database using SVM. Liu et al [29] used the extreme learning machine (ELM) method for feature selection that was applied to 938 features based on a combination of spectral and prosodic features from Emo-DB for speech emotion recognition.…”
Section: Related Studies
confidence: 99%
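The mRMR step quoted above — keep features strongly related to the label, drop features redundant with those already selected — can be sketched greedily. This is an illustrative version only, not the implementation in [20]: it substitutes absolute Pearson correlation for mutual information so the sketch stays dependency-free.

```python
import numpy as np

def mrmr_select(X, y, k):
    """Greedy mRMR-style selection: at each step pick the feature that
    maximizes |corr(feature, label)| minus its mean |corr| with the
    features already selected. Correlation stands in for mutual
    information here purely for illustration."""
    n_features = X.shape[1]
    # Relevance: absolute correlation of each feature with the label.
    relevance = np.abs([np.corrcoef(X[:, j], y)[0, 1] for j in range(n_features)])
    selected = [int(np.argmax(relevance))]
    while len(selected) < k:
        best_j, best_score = None, -np.inf
        for j in range(n_features):
            if j in selected:
                continue
            # Redundancy: mean absolute correlation with selected features.
            redundancy = np.mean(
                [abs(np.corrcoef(X[:, j], X[:, s])[0, 1]) for s in selected]
            )
            score = relevance[j] - redundancy
            if score > best_score:
                best_j, best_score = j, score
        selected.append(best_j)
    return selected
```

With a duplicated feature column, the duplicate's redundancy penalty pushes the selection toward a weaker but less redundant feature, which is the behavior mRMR is designed to produce.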
“…Previously, various researchers have worked towards improving the performance of depression recognition. One of these works is mentioned in [1], where the authors used speech and textual data with support vector machines (SVM) to perform sentiment analysis. They use WordNet Affect and SentiWordNet for language processing, and pitch, energy, formants, intensity and zero crossing rate (ZCR) features for sound processing, to claim 81% accuracy, which can be further improved using deep learning algorithms like deep-nets and Q-learning.…”
Section: Literature Review
confidence: 99%
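The hybrid pipeline described in that statement — lexicon-based text polarity combined with prosodic audio features, fed to a classifier — can be outlined. The sketch below computes two of the named audio features (short-time energy and ZCR) and uses a toy polarity lexicon in place of SentiWordNet/WordNet Affect; the lexicon entries and function names are illustrative, not from the cited paper.

```python
import numpy as np

def zero_crossing_rate(signal):
    """Fraction of consecutive samples whose signs differ."""
    signs = np.sign(signal)
    signs[signs == 0] = 1  # treat exact zeros as positive
    return float(np.mean(signs[1:] != signs[:-1]))

def short_time_energy(signal):
    """Mean squared amplitude of the frame."""
    return float(np.mean(signal ** 2))

# Toy stand-in for a sentiment lexicon such as SentiWordNet.
TOY_LEXICON = {"happy": 1.0, "great": 0.8, "sad": -1.0, "angry": -0.9}

def text_polarity(tokens):
    """Average polarity of the tokens found in the lexicon."""
    scores = [TOY_LEXICON[t] for t in tokens if t in TOY_LEXICON]
    return float(np.mean(scores)) if scores else 0.0

def hybrid_features(signal, tokens):
    """Concatenate audio and text features into one vector,
    ready for an SVM or any other classifier."""
    return np.array([
        zero_crossing_rate(signal),
        short_time_energy(signal),
        text_polarity(tokens),
    ])
```

A real system would add the remaining prosodic features (pitch, formants, intensity) and train an SVM on the fused vectors; the fusion step itself is just this concatenation.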
“…When using RBF, the choice of two parameters is very important: the penalty C for misclassification, and the constant σ in (2) and (3). If the values of C and σ are identified, then the classifier can predict emotions more accurately.…”
Section: Emotion Classification
confidence: 99%
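The parameter search that statement alludes to can be illustrated. Since no SVM solver is available in the standard library, the sketch below uses a summed-kernel-similarity vote as a stand-in for the trained classifier and searches only over σ via leave-one-out accuracy (C would enter once a real SVM's slack penalty is in play); the data and grid values are illustrative.

```python
import numpy as np

def rbf_kernel(a, b, sigma):
    """RBF kernel k(a, b) = exp(-||a - b||^2 / (2 sigma^2))."""
    return np.exp(-np.sum((a - b) ** 2) / (2.0 * sigma ** 2))

def kernel_vote_predict(X_train, y_train, x, sigma):
    """Predict the class whose training points give the largest summed
    kernel similarity to x (a toy stand-in for an RBF-SVM decision)."""
    scores = {}
    for xi, yi in zip(X_train, y_train):
        scores[yi] = scores.get(yi, 0.0) + rbf_kernel(xi, x, sigma)
    return max(scores, key=scores.get)

def loo_accuracy(X, y, sigma):
    """Leave-one-out accuracy for a single sigma value."""
    hits = 0
    for i in range(len(X)):
        mask = np.arange(len(X)) != i
        held_out_labels = [y[j] for j in range(len(y)) if j != i]
        if kernel_vote_predict(X[mask], held_out_labels, X[i], sigma) == y[i]:
            hits += 1
    return hits / len(X)

# Toy 1-D data: two well-separated emotion classes.
X = np.array([[0.0], [0.1], [0.2], [1.0], [1.1], [1.2]])
y = [0, 0, 0, 1, 1, 1]

# Grid search over sigma; a real SVM would search (C, sigma) jointly.
best_sigma = max([0.01, 0.1, 1.0, 10.0], key=lambda s: loo_accuracy(X, y, s))
```

Too large a σ washes out locality (every point looks similar), so the majority class dominates the vote; the grid search picks a σ narrow enough to keep the two clusters distinct.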
“…Recently, many research projects have been conducted in order to use this kind of data to achieve better results. Several hybrid approaches have been proposed for emotion classification based on text, speech, and image [2]. Changes in the voice are independent of the speaker and the language.…”
Section: Introduction
confidence: 99%