2018
DOI: 10.48550/arxiv.1805.06652
Preprint

Affective computing using speech and eye gaze: a review and bimodal system proposal for continuous affect prediction

Jonny O'Dwyer,
Niall Murray,
Ronan Flynn

Abstract: Speech has been a widely used modality in the field of affective computing. Recently however, there has been a growing interest in the use of multi-modal affective computing systems. These multi-modal systems incorporate both verbal and non-verbal features for affective computing tasks. Such multi-modal affective computing systems are advantageous for emotion assessment of individuals in audio-video communication environments such as teleconferencing, healthcare, and education. From a review of the literature,…

Cited by 2 publications (2 citation statements)
References 43 publications (136 reference statements)
“…We followed the procedure that is commonly adopted in the cognitive assessment scenario (see Table 2), by including all the necessary tools for assessing the factors of interest. Given the advances in affective computing [49] and automatic personality detection [50] fields, we would like to adopt these solutions to improve the perception capabilities of the robotic platform. It will allow the robot to automatically infer the individual's traits.…”
Section: Discussion
confidence: 99%
“…Their results further showed that gaze features were even more helpful than speech. In a study by O'Dwyer et al [49], the addition of eye gaze features to speech yielded an improvement of 19.5% for valence prediction and 3.5% for arousal prediction. An opposing pattern was found by O'Dwyer et al [50] who employed pupillometry and gaze features for valence and arousal estimation on the RECOLA dataset [55].…”
Section: Gaze-based Emotion Recognition
confidence: 98%