For robots to interact effectively with human users they must be capable of coordinated, timely behavior in response to social context. The Adaptive Strategies for Sustainable Long-Term Social Interaction (ALIZ-E) project focuses on the design of long-term, adaptive social interaction between robots and child users in real-world settings. In this paper, we report on the iterative approach taken to scientific and technical developments toward this goal: advancing individual technical competencies and integrating them to form an autonomous robotic system for evaluation "in the wild." The first evaluation iterations have shown the potential of this methodology in terms of adaptation of the robot to the interactant and the resulting influences on engagement. This sets the foundation for an ongoing research program that seeks to develop technologies for social robot companions.
Social robots have the potential to provide support in a number of practical domains, such as learning and behaviour change. This potential is particularly relevant for children, who have proven receptive to interactions with social robots. To reach learning and therapeutic goals, a number of issues need to be investigated, notably the design of an effective child-robot interaction (cHRI) to ensure the child remains engaged in the relationship and that educational goals are met. Typically, current cHRI research experiments focus on a single type of interaction activity (e.g. a game). However, these can suffer from a lack of adaptation to the child, or from an increasingly repetitive nature of the activity and interaction. In this paper, we motivate and propose a practicable solution to this issue: an adaptive robot able to switch between multiple activities within single interactions. We describe a system that embodies this idea, and present a case study in which diabetic children collaboratively learn with the robot about various aspects of managing their condition. We demonstrate the ability of our system to induce a varied interaction and show the potential of this approach both as an educational tool and as a research method for long-term cHRI.
Estimating a person's affective state from facial information is an essential capability for social interaction. Automating this capability has therefore increasingly driven multidisciplinary research over the past decades. At the heart of this issue are challenging signal processing and artificial intelligence problems arising from the inherent complexity of human affect. We therefore propose a principled framework for designing automated systems capable of continuously estimating the human affective state from an incoming stream of images. First, we model human affect as a dynamical system and define the affective state in terms of valence, arousal, and their higher-order derivatives. We then pose affective state estimation as a Bayesian filtering problem and provide a solution based on Kalman filtering (KF) for probabilistic reasoning over time, combined with multiple instance sparse Gaussian processes (MI-SGP) for inferring affect-related measurements from image sequences. We quantitatively and qualitatively evaluate our proposed framework on the AVEC 2012 and AVEC 2014 benchmark datasets and obtain state-of-the-art results using the baseline features as input to our MI-SGP-KF model. We therefore believe that leveraging the Bayesian filtering paradigm can pave the way for further enhancing the design of automated systems for affective state estimation.