2012
DOI: 10.1155/2012/461247
Affect Detection from Text-Based Virtual Improvisation and Emotional Gesture Recognition

Abstract: We have previously developed an intelligent agent that engages with users in virtual drama improvisation. The agent was able to perform sentence-level affect detection from user inputs with strong emotional indicators. However, we noticed that many inputs with weak or no affect indicators also carry emotional implications, yet were regarded as neutral expressions by the previous interpretation. In this paper, we employ latent semantic analysis to perform topic theme detection and identify target audie…

Cited by 6 publications (6 citation statements)
References 12 publications
“…For example, according to the position of hands, one can decide if an individual is straightforward (turning the hands inside towards the questioner) or deceptive (concealing the hands behind the back) [68]. Zhang and Yap (2012) [70] proposed a system for recognizing the target emotional gestures by using the Kinect sensor. Positions of the 7 joints (for example: head, right hand, left hand, right elbow, left elbow, left hip, and hip appropriate) were distinguished by utilizing OpenCV ("Open Source Computer Vision") library and distances between these extracted points were determined.…”
Section: Body Language (mentioning)
confidence: 99%
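As a rough illustration of the joint-distance features described in this excerpt, the sketch below computes pairwise Euclidean distances between seven tracked joints. The joint names, coordinate convention, and feature layout are assumptions made for this example, not the authors' actual implementation.

```python
# Hypothetical sketch: reduce 7 tracked joints (head, hands, elbows, hips)
# to pairwise Euclidean distances that could feed a gesture classifier.
# Joint names and coordinates below are illustrative assumptions.
from itertools import combinations
import math

JOINTS = ["head", "right_hand", "left_hand",
          "right_elbow", "left_elbow", "left_hip", "right_hip"]

def distance_features(joints):
    """joints: dict mapping joint name -> (x, y, z) in sensor camera space."""
    features = {}
    for a, b in combinations(JOINTS, 2):
        ax, ay, az = joints[a]
        bx, by, bz = joints[b]
        features[f"{a}-{b}"] = math.sqrt(
            (ax - bx) ** 2 + (ay - by) ** 2 + (az - bz) ** 2
        )
    return features

# Example with made-up coordinates (metres):
frame = {
    "head": (0.0, 0.6, 2.0), "right_hand": (0.3, 0.1, 1.8),
    "left_hand": (-0.3, 0.1, 1.8), "right_elbow": (0.25, 0.3, 1.9),
    "left_elbow": (-0.25, 0.3, 1.9), "left_hip": (-0.15, -0.2, 2.0),
    "right_hip": (0.15, -0.2, 2.0),
}
print(distance_features(frame)["head-right_hand"])
```

One appeal of pairwise distances over raw coordinates is that they are invariant to where the person stands relative to the sensor, since distances are unaffected by rigid translation or rotation of the whole skeleton.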
“…Another example in this direction comes from Zhang and Yap (2012) who studied automatic affect detection based on participants’ verbal (written) and non-verbal behavior during a virtual role-play. Affect detection in verbal information was performed through latent semantic analysis, which is an algorithm that automatically learns semantic information about words through their common use in natural language ( Landauer and Dumais, 1997 ).…”
Section: Automatic Extraction Of Participant Interaction Behavior In… (mentioning)
confidence: 99%
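The latent semantic analysis step referenced here can be approximated with an off-the-shelf TF-IDF plus truncated-SVD pipeline, as in the hedged sketch below. The toy corpus, theme exemplars, and number of components are illustrative assumptions and do not reproduce the authors' data or pipeline.

```python
# Minimal LSA sketch: TF-IDF followed by truncated SVD, then cosine similarity
# between a user input and hand-picked topic-theme exemplars. Corpus, themes,
# and dimensionality are illustrative assumptions only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.metrics.pairwise import cosine_similarity

theme_exemplars = {
    "bullying": "stop picking on me leave me alone you are mean",
    "support": "I am here for you do not worry we can talk",
}
corpus = list(theme_exemplars.values()) + [
    "why do you always laugh at me",
    "thanks for listening to me today",
]

vectorizer = TfidfVectorizer()
tfidf = vectorizer.fit_transform(corpus)
lsa = TruncatedSVD(n_components=2, random_state=0)  # low-rank semantic space
semantic = lsa.fit_transform(tfidf)

user_input = vectorizer.transform(["nobody ever picks on you like they pick on me"])
user_vec = lsa.transform(user_input)

for theme, row in zip(theme_exemplars, semantic[: len(theme_exemplars)]):
    sim = cosine_similarity(user_vec, row.reshape(1, -1))[0, 0]
    print(theme, round(sim, 3))
```

The point of projecting into the reduced SVD space, rather than comparing raw term vectors, is that inputs sharing no literal words with a theme exemplar can still score as similar through co-occurrence structure learned from the corpus, which is what makes weak-indicator inputs tractable.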
“…Moreover, we are faced with a very fast developing research domain because of the frequent technical improvements and increased availability of relatively cheap virtual reality devices which makes an update since 2009 timely. In particular, in the last years more effort has been put into integrating IVET with other technologies, such as eye-tracking ( Wieser et al, 2010 ), movement extraction devices ( Zhang and Yap, 2012 ; Batrinca et al, 2013 ), and EEG ( Kober et al, 2012 ). Moreover, recent studies have started to address the issue of making the conversation between participants and virtual humans smoother ( Malatesta et al, 2009 ; Zhang and Yap, 2012 ).…”
Section: Conclusion and Future Challenges (mentioning)
confidence: 99%
“…Therefore, this paper uses the movement of tracked points on the face, head, hand, and body into behavioral pattern-based features or behavioral rules to represent specific emotions. The motivation to use features from behavioral patterns for emotion estimation was drawn from behavioral science research on emotional gesture recognition [18,20,21,34,35] and adaptive rule-based facial expression recognitions [19]. The studies have demonstrated unimodal affect recognition by using limited set of gesture-based rules and rules extracted from various facial expression profiles.…”
Section: Introduction (mentioning)
confidence: 99%
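The behavioral-rule idea mentioned in this excerpt can be pictured with a minimal sketch like the one below, where hand-written rules over a few movement features map to an emotion label. The feature names, thresholds, and labels are invented for illustration and are not the rules used in the cited studies.

```python
# Minimal rule-based emotion estimation sketch. Feature names, thresholds,
# and emotion labels are illustrative assumptions, not the cited rule sets.
def estimate_emotion(features):
    """features: dict of per-window measurements derived from tracked
    face/head/hand/body points, e.g. mean hand speed (m/s) and posture flags."""
    if features["hands_above_head"] and features["hand_speed"] > 1.0:
        return "excited"    # fast, raised hands
    if features["arms_crossed"] and features["hand_speed"] < 0.2:
        return "defensive"  # closed, static posture
    if features["head_down"] and features["hand_speed"] < 0.2:
        return "sad"        # lowered head, little movement
    return "neutral"        # no rule fired

print(estimate_emotion({"hands_above_head": False, "arms_crossed": True,
                        "hand_speed": 0.1, "head_down": False}))
```

In practice, a small rule set like this is typically a starting point that is refined or replaced by a trained classifier once enough labeled gesture data is available.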