We present the participatory design process of a robotic tutor of assistive sign language for children with autism spectrum disorder (ASD). Robots have been used in autism therapy and to teach sign language to neurotypical children, but teaching assistive sign language, the most common form of augmentative and alternative communication used by people with ASD, is a novel application. The robot's function is to prompt children to imitate the assistive signs it performs. The robot was therefore co-designed to appeal to children with ASD, taking into account the characteristics of ASD during the design process: impaired language and communication, impaired social behavior, and limited flexibility in daily activities. To accommodate these characteristics, a multidisciplinary team defined design guidelines specific to robots for children with ASD, which were followed in the participatory design process. In a pilot study in which the robot prompted children to imitate nine assistive signs, we found support for the effectiveness of the design: the children successfully imitated the robot and kept their focus on it, as measured by their eye gaze. Children and their companions reported positive experiences with the robot, and companions evaluated it as potentially useful, suggesting that robotic devices could be used to teach assistive sign language to children with ASD.
Enabling diverse users to program robots for different applications is critical for robots to be widely adopted. Most new collaborative robot manipulators come with intuitive programming interfaces that allow novice users to compose robot programs and tune their parameters. However, parameters like motion speeds or exerted forces cannot be easily demonstrated and often require manual tuning, resulting in a tedious trial-and-error process. To address this problem, we formulate the tuning of one-dimensional parameters as an Active Learning problem in which the learner iteratively refines its estimate of the feasible range of parameter values by selecting informative queries. By executing the parametrized actions, the learner gathers the user's feedback, in the form of directional answers ("higher," "lower," or "fine"), and integrates it into the estimate. We propose an Active Learning approach based on Expected Divergence Maximization for this setting and compare it against two baselines on synthetic data. We further compare the approaches on a real-robot dataset obtained from programs written in a simple Domain-Specific Language for a robot arm and manually tuned by expert users (N=8) to perform four manipulation tasks. We evaluate the effectiveness and usability of our interactive tuning approach against manual tuning with a user study in which novice users (N=8) tuned parameters of a human-robot hand-over program. CCS CONCEPTS • Computing methodologies → Active learning settings; • Human-centered computing → User centered design; • Computer systems organization → External interfaces for robotics.
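The query-and-feedback loop described in this abstract can be illustrated with a minimal sketch. Note that this is not the paper's Expected Divergence Maximization learner; it uses a simple bisection strategy over a hypothetical one-dimensional parameter (e.g., a motion speed in m/s) purely to show the directional-feedback protocol, and the `simulated_user` acceptance range is invented for the example.

```python
def tune_parameter(lo, hi, user_feedback, max_queries=10, tol=1e-3):
    """Iteratively narrow the feasible range [lo, hi] using the
    directional answers 'higher', 'lower', or 'fine'."""
    for _ in range(max_queries):
        query = (lo + hi) / 2.0        # midpoint query as a simple stand-in
        answer = user_feedback(query)  # execute the action, gather feedback
        if answer == "fine":
            return query
        elif answer == "higher":
            lo = query                 # feasible values lie above the query
        elif answer == "lower":
            hi = query                 # feasible values lie below the query
        if hi - lo < tol:
            break
    return (lo + hi) / 2.0

# Hypothetical user who accepts speeds between 0.4 and 0.5 m/s.
def simulated_user(speed):
    if speed < 0.4:
        return "higher"
    if speed > 0.5:
        return "lower"
    return "fine"

speed = tune_parameter(0.0, 1.0, simulated_user)
```

An informative-query strategy such as the one the paper proposes would replace the midpoint rule with a query chosen to maximize the expected change in the learner's estimate, but the interaction loop remains the same.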
Design teams of social robots are often multidisciplinary, due to the broad knowledge from different scientific domains needed to develop such complex technology. However, tools to facilitate multidisciplinary collaboration are scarce. We introduce a framework for the participatory design of social robots and a corresponding canvas tool. The canvases can be applied in different parts of the design process to facilitate collaboration between experts from different fields, as well as to incorporate prospective users of the robot into the design process. We investigate the usability of the proposed canvases with two social robot design case studies: a robot that played games online with teenage users and a librarian robot that guided users at a public library. We observe through participants’ feedback that the canvases have the advantages of (1) providing structure, clarity, and a clear process to the design; (2) encouraging designers and users to share their viewpoints to progress toward a shared one; and (3) providing an educational and enjoyable design experience for the teams.
With the goal of having robots learn new skills after deployment, we propose an active learning framework for modelling user preferences about task execution. The proposed approach interactively gathers information by asking questions expressed in natural language. We study the validity and the learning performance of the proposed approach and two of its variants compared to a passive learning strategy. We further investigate the human-robot interaction nature of the framework by conducting a usability study with 18 subjects. The results show that active strategies are applicable for learning preferences in temporal tasks from non-expert users. Furthermore, the results provide insights into the interaction design of active learning robots. CCS CONCEPTS • Computing methodologies → Active learning settings; • Human-centered computing → Natural language interfaces; Interaction design; • Computer systems organization → External interfaces for robotics;
Transparency of robot behaviors increases the efficiency and quality of interactions with humans. To increase the transparency of robot policies, we propose a method for generating robust and focused explanations that express why a robot chose a particular action. The proposed method examines the policy based on the state space in which an action was chosen and describes it in natural language. The method can generate focused explanations by leaving out irrelevant state dimensions, and it avoids explanations that are sensitive to small perturbations or contain ambiguous natural language concepts. Furthermore, the method is agnostic to the policy representation and only requires that the policy can be evaluated at different samples of the state space. We conducted a user study with 18 participants to investigate the usability of the proposed method compared to a comprehensive method that generates explanations using all dimensions. We observed how focused explanations helped the subjects more reliably detect the irrelevant dimensions of the explained system, and how preferences regarding explanation styles and their expected characteristics differed greatly among the participants.