2019 8th International Conference on Affective Computing and Intelligent Interaction Workshops and Demos (ACIIW)
DOI: 10.1109/aciiw.2019.8925291
A multi-layer artificial intelligence and sensing based affective conversational embodied agent

Cited by 7 publications (10 citation statements); references 3 publications.
“…Our perception module did not assign a positive or negative emotional state and it did not involve training an additional model from the observed action units; rather, it detected and provided as output the appearance of the abovementioned action units. The detected action units signified the person was exhibiting one of the selected facial expressions, which is an approach often used in studies where the perceptual component in the architecture is focused only on facial expressions, and not on more detailed emotional state (Yalcin and DiPaola, 2018 ; DiPaola and Yalçin, 2019 ).…”
Section: Our Framework
confidence: 99%
“…The virtual moderator has a positive influence on users’ behaviours, cognition, and emotions through the Proteus and social identification effects enabled by VR technology [7, 53, 54]. From an AI perspective, deep learning models enable empathic conversations by introducing features such as emotion recognition, eye gaze, and voice stress [73], which are crucial to building intimacy between digital humans and users [39]. Therefore, AI provides a step change towards producing enhanced methods of communication that are as realistic as possible within a virtual environment.…”
Section: Discussion
confidence: 99%
“…Gesture-based communication has been highly effective in inducing effects of co-presence [77]. When combined with other communication enhancers, including emotion recognition and eye contact [73], a positive immersive experience is achieved, which leads to creative discussions. Intelligent HCI and deep learning approaches play a vital role in enabling multiple modes of communication [75].…”
Section: Discussion
confidence: 99%
“…Multiagent and synergetic constellation awareness overcome these limitations. Moreover, embodied intelligence [286] and deep, general, and evolutionary learning can be applied to multiagent systems and constellations for realistic multimodal interaction. They contribute to the intelligent evolution of situational awareness systems.…”
Section: Discussion
confidence: 99%