Geometry and handwriting rely heavily on the visual representation of basic shapes. Perceiving these shapes and understanding complex spatial constructs can be challenging for students with visual impairments. For instance, learning to draw depends heavily on spatial and temporal components, which are often inaccessible to children with visual impairments. Hand-held robots, such as the Cellulo robots, open unique opportunities to teach drawing and writing through haptic feedback. In this paper, we investigate how these tangible robots could support inclusive, collaborative learning activities, particularly for children with visual impairments. We conducted a user study with 20 pupils with and without visual impairments, in which they engaged in multiple drawing activities with tangible robots. We contribute novel insights on the design of child-robot interaction, the learning of shapes and letters, children's engagement, and responses in a collaborative scenario that address the challenges of inclusive learning.
Social agents should exhibit socially adequate behavior that fits the context they encounter. Fitting the context is particularly relevant for interactive agents that interact with and are observed by people. Hence, people's perceptions of such social capabilities are an important concern. Socially adequate behavior is more easily identifiable in the presence of other social actors. However, even when an agent is alone, its ability to adjust to the context might be socially motivated and interpreted as such. Similarly, intelligent agents may be identified as social beings when acting alone. Moreover, social context is triggered in different ways. In this study, we explore whether adaptation to the physical surroundings (e.g., the agent's location) is enough to shape the perceptions of people observing the agent. We contribute to the study of situated cognition's role in interpreting an autonomous agent's behavior. In particular, we explore how behavior changes grounded in location as a contextual cue affect the motivation an observer ascribes to the agent's behavior. We implemented a virtual scenario with multiple contexts and one simple character employing a computational model called Cognitive Social Frames, which supports behavior adaptation to context. We conducted a user study (n=92) to assess whether an observer's perceptions of intention and motivation are affected by an agent's capability to adapt to different contexts. Our findings suggest that (a) despite no other agents being present, participants ascribe social motivations to the agent's adaptive behavior, (b) such attributions are independent of visual cues, and (c) even without any pre-established norms, agents that consistently adjust their behavior to the physical context are perceived as more social.
CCS CONCEPTS: • Human-centered computing → Empirical studies in HCI; User studies; • Computing methodologies → Agent / discrete models.
Social robots have been shown to be promising tools for delivering therapeutic tasks to children with Autism Spectrum Disorder (ASD). However, their efficacy is currently limited by a lack of flexibility in the robot's social behavior, which hinders meeting therapeutic and interaction goals. Robot-assisted interventions are often based on structured tasks in which the robot sequentially guides the child towards the task goal. Motivated by the need for personalization to accommodate a diverse set of child profiles, this paper investigates the effect of different robot action sequences in structured, socially interactive tasks targeting attention skills in children with different ASD profiles. Based on an autism diagnostic tool, we devised a robotic prompting scheme on a NAO humanoid robot, aimed at eliciting goal behaviors from the child, and integrated it into a novel interactive storytelling scenario involving screens. We programmed the robot to operate in three different modes: diagnostic-inspired (Assess), personalized therapy-inspired (Therapy), and random (Explore). Our exploratory study with 11 young children with ASD highlights the usefulness and limitations of each mode according to different possible interaction goals, and paves the way towards more complex methods for balancing short-term and long-term goals in personalized robot-assisted therapy.