2016
DOI: 10.1177/1059712316664017

Developing crossmodal expression recognition based on a deep neural model

Abstract: A robot capable of understanding emotion expressions can increase its own capability of solving problems by using emotion expressions as part of its own decision-making, in a similar way to humans. Evidence shows that the perception of human interaction starts with an innate perception mechanism, where the interaction between different entities is perceived and categorized into two very clear directions: positive or negative. As a person develops through childhood, this perception evolves and is shaped…

Cited by 63 publications (60 citation statements)
References 51 publications
“…[5]), including with reference to decoding tactile (gestural) inputs [12]. Furthermore, Hertenstein et al. [25] found a lower mean percentage of correct classification/decoding for the Rejection (64%) and Attachment (59%) emotions identified above than we did in our study.…”
Section: Decoding Emotions (contrasting)
confidence: 55%
“…The reason for this was that the instructions vocalized by the experimenters did not request time-limited responding from participants regarding the conveyed emotions. Time-limitation was considered… [Footnote 5:] We winsorized 3 values (outliers) for each of the 16 conditions, and additionally one extra for each of the gender-emotion conditions with the highest variance (female-sadness, male-sadness, female-love, male-love), i.e. 52 values out of 512 data points in total.…”
Section: Duration (mentioning)
confidence: 99%
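
For readers unfamiliar with the procedure quoted above, the following is a minimal Python sketch of per-condition winsorization. The data, condition sizes, and clipping limits are illustrative assumptions, not the cited study's dataset or exact procedure.

```python
# Minimal sketch of per-condition winsorization (illustrative data, not the
# cited study's dataset): 16 conditions x 32 hypothetical ratings = 512 values.
import numpy as np
from scipy.stats.mstats import winsorize

rng = np.random.default_rng(0)
ratings = rng.normal(loc=5.0, scale=2.0, size=(16, 32))  # hypothetical responses

# Winsorizing replaces the most extreme values with the nearest retained value
# instead of discarding them. Here the lowest and highest 5% of each condition
# are clipped; the cited study instead clipped a fixed 3 values per condition
# (plus one extra in the four highest-variance conditions).
clipped = np.stack([winsorize(cond, limits=(0.05, 0.05)).data for cond in ratings])
print(clipped.shape)  # (16, 32): same shape, extreme values pulled inward
```
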
“…Studies have typically shown that multiple dimensions of expression facilitate classification analyses [39,44]. We have demonstrated the potential for tactile interaction to provide much information regarding emotional conveyance; nevertheless, emotional interaction in naturalistic contexts is almost always of a context-specific (e.g., dependent on the type of task) and multi-sensory nature.…”
(mentioning)
confidence: 99%
“…This is a key selling point of adopting textile wearables for use in affect-based human-robot tactile interaction. Studies indicate that emotion decoding is more accurate and more typical when multi-modal sensors that pick up specific emotional and gestural information are added ([39]; also see [44]). Such multi-modal encoding and decoding allows for contextual nuance in the affective interaction.…”
(mentioning)
confidence: 99%
“…Compared with RNNs, CNNs are more suitable for computer vision applications; hence, the CNN derivative C3D [107], which uses 3D convolutional kernels with weights shared along the time axis instead of the traditional 2D kernels, has been widely used for dynamic (video-based) FER (e.g., [83], [108], [189], [197], [198]) to capture spatio-temporal features. Based on C3D, many derived structures have been designed for FER.…”
Section: RNN and C3D (mentioning)
confidence: 99%
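
To make the 3D-convolution idea in the quote above concrete, here is a minimal PyTorch sketch of a C3D-style block; the layer sizes, pooling scheme, and class name are illustrative assumptions rather than the published C3D configuration.

```python
# Minimal sketch of a C3D-style spatio-temporal block (illustrative sizes,
# not the published C3D architecture).
import torch
import torch.nn as nn

class Spatiotemporal3DBlock(nn.Module):
    def __init__(self, in_channels=3, out_channels=64):
        super().__init__()
        # A 3x3x3 kernel shares its weights along the time axis as well as the
        # two spatial axes, so one filter responds to motion and appearance jointly.
        self.conv = nn.Conv3d(in_channels, out_channels, kernel_size=3, padding=1)
        self.act = nn.ReLU(inplace=True)
        # Pool only over space here, preserving temporal resolution
        # (C3D similarly keeps full temporal resolution in its first pooling layer).
        self.pool = nn.MaxPool3d(kernel_size=(1, 2, 2))

    def forward(self, clip):
        # clip shape: (batch, channels, frames, height, width)
        return self.pool(self.act(self.conv(clip)))

# Example: a dummy 16-frame RGB face clip at 112x112 resolution.
clip = torch.randn(1, 3, 16, 112, 112)
features = Spatiotemporal3DBlock()(clip)
print(features.shape)  # torch.Size([1, 64, 16, 56, 56])
```
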