2015
DOI: 10.14569/ijarai.2015.040204

Speech emotion recognition in emotional feedback for Human-Robot Interaction

Abstract: For robots to plan their actions autonomously and interact with people, recognizing human emotions is crucial. For most humans, nonverbal cues such as pitch, loudness, spectrum, and speech rate are efficient carriers of emotion. The acoustic features of a spoken voice probably contain crucial information about the emotional state of the speaker; within this framework, a machine might use such properties of sound to recognize emotions. This work evaluated six different kinds of classifiers to predict six…
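
The abstract describes predicting emotions from prosodic and spectral properties of speech (pitch, loudness, spectrum, speech rate). As a minimal sketch of that kind of pipeline (not the paper's actual method: the feature set, the librosa/scikit-learn tooling, the random-forest classifier, and the `files`/`labels` variables are all assumptions made here for illustration), it might look like:

```python
# Minimal speech-emotion-classification sketch (illustrative only; the paper's
# actual features, dataset, and six evaluated classifiers are not reproduced).
# Assumes librosa and scikit-learn are installed, and that `files` / `labels`
# (hypothetical) hold .wav paths and their emotion labels.
import numpy as np
import librosa
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

def extract_features(path):
    """Crude prosodic/spectral descriptors: pitch, loudness, spectral shape."""
    y, sr = librosa.load(path, sr=16000)
    f0 = librosa.yin(y, fmin=60, fmax=400, sr=sr)        # pitch contour
    rms = librosa.feature.rms(y=y)[0]                    # loudness proxy
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)   # spectral shape
    return np.concatenate([
        [f0.mean(), f0.std(), rms.mean(), rms.std()],
        mfcc.mean(axis=1), mfcc.std(axis=1),
    ])

X = np.vstack([extract_features(f) for f in files])
clf = RandomForestClassifier(n_estimators=200, random_state=0)
print(cross_val_score(clf, X, labels, cv=5).mean())      # rough accuracy estimate
```

The summary statistics (mean/std per utterance) collapse variable-length audio into fixed-size vectors so any standard classifier can be cross-validated on them.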

Cited by 24 publications (8 citation statements)
References 54 publications
“…The data in Table 4 show that the proposed classification method surpasses most of the known models of emotion detection based on the speech signal. At the same time, in (Razuri et al., 2015) the share of correct answers is 96.97%, which is 3.78% higher than the results of this study. However, it should be noted that the classification on the (Martin et al., 2006) database in (Razuri et al., 2015) was performed for only 6 types of emotions, without the neutral state.…”
Section: Proposed Classification Methods Comparison With Other Studies in the Field (contrasting)
confidence: 83%
“…Also, in (Razuri et al., 2015), 264 samples of audio signals extracted from video recordings were used for the study, with one utterance for each emotion.…”
Section: Proposed Classification Methods Comparison With Other Studies in the Field (contrasting)
confidence: 83%
“…Such models are widely used in human-computer interfaces in general, and in voice user interfaces (Alexa, Cortana, Siri, Alice) in particular. In addition, emotion recognition models are widely used in the following areas: speech analysis applications in medicine [1], security [2], robotics [3], and automated systems [4]. Nevertheless, at the current stage of their development, models for automatic emotion recognition in speech cannot deliver adequate performance on real-world data [5], which motivates the development of new approaches to recognizing human emotions in speech.…”
Section: Introduction (unclassified)