Successful social robot services depend on how well robots can interact with users. Effective service is achieved through smooth, engaged, and human-like interactions in which the robot reacts appropriately to the user’s affective state. This article proposes a novel Automatic Cognitive Empathy Model (ACEM) for humanoid robots that achieves longer and more engaged human-robot interactions (HRI) by considering users’ emotions and responding to them appropriately. The proposed model continuously detects a user’s affective state from facial expressions and generates the desired empathic behavior, either parallel or reactive, adapted to the user’s personality. Users’ affective states are detected using a stacked autoencoder network that is trained and tested on the RAVDESS dataset.
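A minimal sketch of how such a parallel/reactive behavior policy could look, assuming the classifier outputs a discrete emotion label and the user’s personality is summarized by a single extraversion score; the emotion labels, threshold, and behavior names below are illustrative assumptions and are not taken from the article:

```python
# Illustrative empathy policy: mirror positive/neutral emotions (parallel
# empathy) and respond supportively to negative ones (reactive empathy).
# Emotion labels, the extraversion threshold, and the behavior strings are
# assumptions for illustration only.

NEGATIVE = {"sad", "angry", "fearful", "disgust"}

def select_empathic_behavior(emotion: str, extraversion: float) -> str:
    """Return a robot behavior for a detected emotion and a personality trait.

    emotion: label produced by the facial-expression classifier.
    extraversion: user trait in [0, 1]; higher values get more expressive behaviors.
    """
    if emotion in NEGATIVE:
        # Reactive empathy: acknowledge the state and try to regulate it.
        return "comforting_speech" if extraversion < 0.5 else "comforting_speech_and_gesture"
    # Parallel empathy: mirror the user's positive or neutral state.
    return "mirroring_expression" if extraversion < 0.5 else "mirroring_expression_and_gesture"

if __name__ == "__main__":
    print(select_empathic_behavior("sad", extraversion=0.7))
```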
The overall empathic model is verified through an experiment in which different emotions are triggered in participants and empathic behaviors are then applied according to the proposed hypotheses. The results confirm the effectiveness of the proposed model in terms of the social and friendship concepts that participants perceived during interaction with the robot.
The number of collaborative robots that perform different tasks in close proximity to humans is increasing. Previous studies have shown that enabling non-expert users to program a cobot reduces the cost of robot maintenance and reprogramming. Since this approach is based on an interaction between the cobot and human partners, in this study we investigate whether making this interaction more transparent improves it and leads to better performance for non-expert users. To evaluate the proposed methodology, we conducted an experiment with 67 participants. The results show that providing explanations leads to higher performance in terms of both efficiency and efficacy: the number of times the task is completed without teaching the cobot a wrong instruction is twice as high when explanations are provided. In addition, providing explanations increases users’ satisfaction and trust when working with the cobot.
Considering humans’ emotions in different applications and systems has received substantial attention over the last three decades. The traditional approach to emotion detection is to first extract features and then apply a classifier, such as an SVM, to find the true class. However, recently proposed deep-learning-based models outperform traditional machine learning approaches without requiring a separate feature extraction phase. This paper proposes a novel deep-learning-based facial emotion detection model that uses facial muscle activities as raw input to recognize the type of expressed emotion in real time. To this end, we first use OpenFace to extract the activation values of the facial muscles, which are then presented to a Stacked Auto-Encoder (SAE) as the feature set. The SAE returns the best combination of muscles for describing a particular emotion; these extracted features are finally fed to a softmax layer to perform the multi-class classification task. The proposed model has been applied to the CK+, MMI, and RAVDESS datasets, achieving average accuracies of 95.63%, 95.58%, and 84.91%, respectively, for six-class emotion detection, which outperforms state-of-the-art algorithms.
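A minimal sketch of this pipeline in Keras, assuming the 17 action-unit intensity values that OpenFace exports as input; the layer sizes, training settings, and placeholder data are assumptions for illustration and not the article’s exact configuration:

```python
# Sketch: action-unit (AU) intensities -> stacked autoencoder -> softmax head.
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models

NUM_AUS, NUM_CLASSES = 17, 6

# Encoder / decoder of the stacked autoencoder.
inputs = layers.Input(shape=(NUM_AUS,))
h1 = layers.Dense(32, activation="relu")(inputs)
h2 = layers.Dense(16, activation="relu")(h1)          # learned AU combinations
d1 = layers.Dense(32, activation="relu")(h2)
recon = layers.Dense(NUM_AUS, activation="linear")(d1)

autoencoder = models.Model(inputs, recon)
autoencoder.compile(optimizer="adam", loss="mse")

# Unsupervised pre-training on AU vectors (placeholder data of shape [n, 17]).
X = np.random.rand(1000, NUM_AUS).astype("float32")
autoencoder.fit(X, X, epochs=5, batch_size=32, verbose=0)

# Reuse the encoder and add a softmax head for six-way emotion classification.
encoder = models.Model(inputs, h2)
clf_out = layers.Dense(NUM_CLASSES, activation="softmax")(encoder.output)
classifier = models.Model(encoder.input, clf_out)
classifier.compile(optimizer="adam",
                   loss="sparse_categorical_crossentropy",
                   metrics=["accuracy"])

y = np.random.randint(0, NUM_CLASSES, size=1000)      # placeholder labels
classifier.fit(X, y, epochs=5, batch_size=32, verbose=0)
```

The two-stage procedure in the sketch, unsupervised reconstruction followed by supervised fine-tuning with the softmax head, mirrors the general idea of letting the SAE select informative muscle combinations before classification.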
Maintaining engagement is challenging in human–human interaction. When disengagement happens, people try to adapt their behavior with the expectation that engagement will be regained. In human–robot interaction, although socially interactive robots are engaging, people can easily drop engagement while interacting with robots. This paper proposes a multi-layer re-engagement system that applies different strategies, through human-like verbal and non-verbal behaviors, to regain user engagement, taking into account the user’s attention level and affective state. We conducted a usability test in a robot storytelling scenario to demonstrate the technical operation of the system and to investigate how people react when interacting with a robot that has re-engagement ability. The usability test results reveal that the system has the potential to maintain a user’s engagement. In response to open-ended questions, participants gave positive comments about the robot with this ability. They also rated the robot with the re-engagement ability higher on several dimensions, i.e., animacy, likability, and perceived intelligence.
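A hedged sketch of what such a layered re-engagement policy could look like, assuming attention is estimated as a score in [0, 1] and emotions are discrete labels; the thresholds and strategy names are illustrative assumptions, not the system’s actual parameters:

```python
# Layered re-engagement policy: strategies escalate as attention drops, and
# verbal content is adjusted to the detected affective state. Thresholds and
# strategy names are assumptions for illustration only.
from dataclasses import dataclass

@dataclass
class UserState:
    attention: float   # 0.0 (fully disengaged) .. 1.0 (fully attentive)
    emotion: str       # e.g. "neutral", "bored", "happy"

def choose_reengagement(state: UserState) -> str:
    if state.attention > 0.7:
        return "continue_story"                  # no intervention needed
    if state.attention > 0.4:
        # Layer 1: subtle non-verbal cues (gaze shift, gesture).
        return "gaze_at_user_and_gesture"
    if state.emotion == "bored":
        # Layer 2: verbal strategy tailored to the affective state.
        return "ask_opinion_question"
    # Layer 3: strongest strategy, explicitly address the user and pause.
    return "call_user_by_name_and_pause"

if __name__ == "__main__":
    print(choose_reengagement(UserState(attention=0.3, emotion="bored")))
```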