Robots are increasingly being studied for use in education. It is expected that robots will have the potential to facilitate children's learning and function autonomously within real classrooms in the near future. Previous research has highlighted the importance of designing robots that are acceptable across different educational practices. In parallel, scholars have raised ethical concerns about children interacting with robots. Drawing on a Responsible Research and Innovation perspective, our goal is to move away from research concerned with designing features that render robots more socially acceptable to end users, and toward a reflective dialogue that considers the key ethical issues and long-term consequences of implementing classroom robots for teachers and children in primary education. This paper presents the results of several focus groups conducted with teachers in three European countries. Through a thematic analysis, we provide a theoretical account of teachers' perspectives on classroom robots pertaining to privacy, robot role, effects on children and responsibility. Implications for the field of educational robotics are discussed.
Keywords: Educational robots, Social implications, Ethics, Teachers' perspectives, Thematic analysis, Focus group
Acknowledgments: We would first of all like to thank all the teachers and students who took part in the studies. We would also like to extend our gratitude to teacher education students Rebecka Olofsson and Trixie Assarsson for their excellent video editing. We thank Tiago Ribeiro, Eugenio Di Tullio, Etienne Roesch and Daniel Gooch for facilitating some of the focus groups. We would also like to thank master's student Thomas Rider for his initial transcription services and ideas. We also thank the MUL group at the University of Gothenburg for their valuable feedback on an earlier version of this paper. This work was partially supported by the European Commission (EC) and was funded by the EU FP7 ICT-317923 project EMOTE (www.emote-project.eu). P. Alves-Oliveira acknowledges an FCT grant, ref. SFRH/BD/110223/2015. The authors are solely responsible for the content of this publication. It does not represent the opinion of the EC, and the EC is not responsible for any use that might be made of data appearing therein. We recommend you cite the published version. The final publication is available at Springer via http://dx
This article surveys the area of computational empathy, analysing different ways by which artificial agents can simulate and trigger empathy in their interactions with humans. Empathic agents can be seen as agents that have the capacity to place themselves in the emotional situation of a user or another agent and respond appropriately. We also survey artificial agents that, by their design and behaviour, can lead users to respond emotionally as if they were experiencing the agent's situation. In the course of this survey, we present the research conducted to date on empathic agents in light of the principles and mechanisms of empathy found in humans. We end by discussing some of the main challenges that this exciting area will face in the future.
The idea of robotic companions capable of establishing meaningful relationships with humans remains far from being accomplished. To achieve this, robots must interact with people in natural ways, employing the social mechanisms that people use when interacting with each other. One such mechanism is empathy, often seen as the basis of social cooperation and prosocial behaviour. We argue that artificial companions capable of behaving in an empathic manner, which involves the capacity to recognise another's affect and respond appropriately, are more successful at establishing and maintaining a positive relationship with users. This paper presents a study in which an autonomous robot with empathic capabilities acts as a social companion to two players in a chess game. The robot reacts to the moves played on the chessboard by displaying several facial expressions and verbal utterances, showing empathic behaviours towards one player and behaving neutrally towards the other. Quantitative and qualitative results from 31 participants indicate that users towards whom the robot behaved empathically perceived the robot as friendlier, which supports our hypothesis that empathy plays a key role in human-robot interaction.
The design of an affect recognition system for socially perceptive robots relies on representative data: human-robot interaction in naturalistic settings requires an affect recognition system to be trained and validated with contextualised affective expressions, that is, expressions that emerge in the same interaction scenario as the target application. In this paper we propose an initial computational model to automatically analyse human postures and body motion in order to detect the engagement of children playing chess with an iCat robot that acts as a game companion. Our approach is based on vision-based automatic extraction of expressive postural features from videos capturing the behaviour of the children from a lateral view. An initial evaluation, conducted by training several recognition models with contextualised affective postural expressions, suggests that patterns of postural behaviour can be used to accurately predict the engagement of the children with the robot, thus making our approach suitable for integration into an affect recognition system for a game companion in a real-world scenario.
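The pipeline described in this abstract (extract postural features from video, then train a model to classify engagement) can be illustrated with a deliberately simplified sketch. The feature names (`body_lean`, `motion_energy`) and the nearest-centroid classifier below are illustrative assumptions, not the authors' actual features or recognition models:

```python
# Minimal sketch of engagement classification from postural features.
# Assumption: feature names and the nearest-centroid model are illustrative,
# standing in for the vision-based features and trained models in the paper.
from dataclasses import dataclass
from math import dist


@dataclass
class PostureSample:
    body_lean: float      # e.g. normalised forward lean toward the chessboard
    motion_energy: float  # e.g. amount of body motion between frames
    label: str            # "engaged" or "disengaged" (annotated ground truth)


def _centroid(samples):
    """Mean feature vector of a group of samples."""
    n = len(samples)
    return (sum(s.body_lean for s in samples) / n,
            sum(s.motion_energy for s in samples) / n)


def train(samples):
    """Build one centroid per label from annotated postural samples."""
    by_label = {}
    for s in samples:
        by_label.setdefault(s.label, []).append(s)
    return {label: _centroid(group) for label, group in by_label.items()}


def predict(model, body_lean, motion_energy):
    """Classify a new posture by its nearest label centroid."""
    point = (body_lean, motion_energy)
    return min(model, key=lambda label: dist(model[label], point))
```

In practice the features would come from automatic video analysis of the child's lateral view, and a richer classifier would replace the centroid rule; the sketch only shows the train-then-predict structure the abstract describes.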