2017
DOI: 10.1016/j.bica.2016.12.002

Speech emotion recognition based on a modified brain emotional learning model

Cited by 41 publications (16 citation statements)
References 15 publications
“…A positive emotion of a user of AI could then serve as a positive reward, and vice versa for a user's negative emotion for an AI relying on reinforcement learning. First attempts in this direction exist (Motamed, Setayeshi, & Rabiee, 2017), but more is expected to come.…”
Section: Reinforced Learning (mentioning)
confidence: 99%
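The excerpt above suggests mapping a recognized user emotion onto a scalar reinforcement-learning reward. A minimal Python sketch of that idea follows; the valence mapping, the action set, and the bandit-style update are illustrative assumptions, not the cited authors' method.

import random

# Hypothetical mapping: positive user emotions yield positive rewards.
EMOTION_VALENCE = {"happy": 1.0, "neutral": 0.0, "sad": -0.5, "angry": -1.0}

def emotion_reward(detected_emotion: str) -> float:
    """Turn a recognized user emotion into a scalar reward."""
    return EMOTION_VALENCE.get(detected_emotion, 0.0)

# Epsilon-greedy bandit over possible agent responses, driven by that reward.
actions = ["formal_reply", "casual_reply", "empathetic_reply"]
q = {a: 0.0 for a in actions}
counts = {a: 0 for a in actions}

def choose_action(epsilon: float = 0.1) -> str:
    if random.random() < epsilon:
        return random.choice(actions)          # explore
    return max(q, key=q.get)                   # exploit best-known action

def update(action: str, detected_emotion: str) -> None:
    r = emotion_reward(detected_emotion)
    counts[action] += 1
    q[action] += (r - q[action]) / counts[action]   # incremental mean estimate

# Example: the user reacted happily to an empathetic reply.
update("empathetic_reply", "happy")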
“…A 60.89% test set unweighted average recall (UAR) was achieved for frame-based emotion recognition using the CNN system. Motamed et al. (2017) introduced an optimized brain emotional learning model (BEL) that merged an adaptive neuro-fuzzy inference system (ANFIS) and multilayer perceptron (MLP) model for speech emotion recognition. The ANFIS was intended to model the human amygdala and orbitofrontal cortex in order to make rules that were passed to the MLP network.…”
Section: Emotion Recognition Using Speech (mentioning)
confidence: 99%
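The ANFIS-to-MLP pipeline described in the excerpt can be caricatured as a fuzzification stage whose rule activations feed a standard MLP classifier. The sketch below uses Gaussian memberships, scikit-learn's MLPClassifier, and synthetic data; all three are assumptions, and this is not the BEL model from the cited paper.

import numpy as np
from sklearn.neural_network import MLPClassifier

def fuzzify(X: np.ndarray, n_sets: int = 3) -> np.ndarray:
    """Gaussian fuzzy memberships per feature: a crude stand-in for the
    ANFIS rule layer (assumption; real ANFIS rule parameters are learned)."""
    lo, hi = X.min(axis=0), X.max(axis=0)
    Xn = (X - lo) / (hi - lo + 1e-9)                 # normalize to [0, 1]
    centers = np.linspace(0.0, 1.0, n_sets)          # fixed set centers
    # Broadcast to (samples, features, n_sets), then flatten per sample.
    return np.exp(-((Xn[..., None] - centers) ** 2) / 0.05).reshape(len(X), -1)

# Synthetic stand-in for speech features (e.g., MFCC statistics).
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 13))
y = rng.integers(0, 4, size=200)                     # four emotion classes

mlp = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=0)
mlp.fit(fuzzify(X), y)                               # rule activations -> MLP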
“…The review of speech emotion recognition showed that there are numerous corpora available for model research, validation, and testing (Haq et al., 2008; Grimm et al., 2008; Canavan et al., 1997; Durston et al., 2001; Engbert and Hansen, 2007; Burkhardt et al., 2005; Hansen and Bou-Gazale, 1997; Martin et al., 2006; Schuller et al., 2007; Steininger et al., 2002). Speech features used for input to emotion recognition models include MFCC (Motamed et al., 2017; Wu et al., 2011), prosody (Dai et al., 2015; Wu et al., 2011; Fernandez and Picard, 2011), and voice-quality (Fernandez and Picard, 2011; Dai et al., 2015) features. The use of a pre-compiled, publicly available speech feature set was investigated as part of the work undertaken by C. K.…”
Section: Emotion Recognition Using Speech (mentioning)
confidence: 99%
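MFCC features of the kind listed in the excerpt are commonly extracted with librosa; the snippet below is a generic example, with the file path and parameter values as placeholders rather than settings from any cited work.

import numpy as np
import librosa

# Load an utterance (path is a placeholder).
y, sr = librosa.load("utterance.wav", sr=16000)

# 13 MFCCs per frame, then utterance-level statistics: a common way to
# produce a fixed-length input vector for an emotion classifier.
mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)   # shape: (13, n_frames)
features = np.concatenate([mfcc.mean(axis=1), mfcc.std(axis=1)])  # shape: (26,)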
“…Firstly, there is uncertainty in the definition of the speech emotion model. There are two main types of speech emotion model definition [3]: one is the speech emotion model of discrete categories; the other is the speech emotion model of continuous dimensions.…”
Section: Introduction (mentioning)
confidence: 99%
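The two model definitions contrasted in the excerpt map onto two different label representations. The sketch below is purely illustrative; the category names and axis ranges are assumptions, not a standard taken from reference [3].

from dataclasses import dataclass
from enum import Enum

class DiscreteEmotion(Enum):
    """Categorical model: one label per utterance."""
    ANGER = "anger"
    HAPPINESS = "happiness"
    SADNESS = "sadness"
    NEUTRAL = "neutral"

@dataclass
class DimensionalEmotion:
    """Continuous model: a point in valence-arousal space."""
    valence: float  # negative (-1.0) to positive (+1.0)
    arousal: float  # calm (-1.0) to excited (+1.0)

label_a = DiscreteEmotion.HAPPINESS
label_b = DimensionalEmotion(valence=0.8, arousal=0.6)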