Children's oral language skills in preschool can predict their success in reading, writing, and academics in later schooling. Helping children improve their language skills early on could lead to more children succeeding later. As such, we examined the potential of a sociable robotic learning/teaching companion to support children's early language development. In a microgenetic study, 17 children played a storytelling game with the robot eight times over a two-month period. We evaluated whether a robot that "leveled" its stories to match the child's current abilities would lead to greater learning and language improvements than a robot that was not matched. All children learned new words, created stories, and enjoyed playing. Children who played with a matched robot used more words, and more diverse words, in their stories than children who played with an unmatched robot. Understanding the interplay between the robot's and the children's language will inform future work on robot companions that support children's education through play.
Though substantial research has been dedicated towards using technology to improve education, no current methods are as effective as one-on-one tutoring. A critical, though relatively understudied, aspect of effective tutoring is modulating the student's affective state throughout the tutoring session in order to maximize long-term learning gains. We developed an integrated experimental paradigm in which children play a second-language learning game on a tablet, in collaboration with a fully autonomous social robotic learning companion. As part of the system, we measured children's valence and engagement via an automatic facial expression analysis system. These signals were combined into a reward signal that fed into the robot's affective reinforcement learning algorithm. Over several sessions, the robot played the game and personalized its motivational strategies (using verbal and non-verbal actions) to each student. We evaluated this system with 34 children in preschool classrooms for a duration of two months. We saw that (1) children learned new words from the repeated tutoring sessions, (2) the affective policy personalized to students over the duration of the study, and (3) students who interacted with a robot that personalized its affective feedback strategy showed a significant increase in valence, as compared to students who interacted with a non-personalizing robot. This integrated system of tablet-based educational content, affective sensing, affective policy learning, and an autonomous social robot holds great promise for a more comprehensive approach to personalized tutoring.
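The abstract above describes combining children's valence and engagement into a reward signal for the robot's affective reinforcement learning algorithm. A minimal sketch of that combination step is below; the weighted-sum form, the weights, and the [-1, 1] ranges are illustrative assumptions, not the authors' published formulation.

```python
# Hypothetical sketch: combine per-observation valence and engagement
# estimates (from a facial expression analysis system) into a scalar
# reward for an affective RL policy. Weights and ranges are assumptions.

def affective_reward(valence, engagement, w_valence=0.5, w_engagement=0.5):
    """Combine valence and engagement (each assumed in [-1, 1]) into a reward."""
    if not (-1.0 <= valence <= 1.0 and -1.0 <= engagement <= 1.0):
        raise ValueError("valence and engagement must be in [-1, 1]")
    return w_valence * valence + w_engagement * engagement

# Example: a smiling, attentive child yields a positive reward.
r = affective_reward(valence=0.8, engagement=0.6)  # 0.5*0.8 + 0.5*0.6 = 0.7
```

In a setup like this, the policy would receive the reward after each motivational action and shift toward strategies that keep the learner in a positive, engaged state.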
We deployed an autonomous social robotic learning companion in three preschool classrooms at an American public school for two months. Before and after this deployment, we asked the teachers and teaching assistants who worked in the classrooms about their views on the use of social robots in preschool education. We found that teachers' expectations about the experience of having a robot in their classrooms often did not match up with their actual experience. These teachers generally expected the robot to be disruptive, but found that it was not, and furthermore, had numerous positive ideas about the robot's potential as a new educational tool for their classrooms. Based on these interviews, we provide a summary of lessons we learned about running child-robot interaction studies in preschools. We share some advice for future researchers who may wish to engage teachers and schools in the course of their own human-robot interaction work. Understanding the teachers, the classroom environment, and the constraints involved is especially important for microgenetic and longitudinal studies, which require more of the school's time, as well as more of the researchers' time, and represent a greater investment for everyone involved.
Tega is a new expressive "squash and stretch," Android-based social robot platform, designed to enable long-term interactions with children.
I. A NEW SOCIAL ROBOT PLATFORM
Tega is the newest social robot platform designed and built by a diverse team of engineers, software developers, and artists at the Personal Robots Group at the MIT Media Lab. This robot, with its furry, brightly colored appearance, was developed specifically to enable long-term interactions with children. Tega comes from a line of Android-based robots that leverage smartphones to drive computation and display an animated face [1]-[3]. The phone runs software for behavior control, motor control, and sensor processing. The phone's abilities are augmented with an external high-definition camera mounted in the robot's forehead and a set of on-board speakers. Tega's motion was inspired by the "squash and stretch" principles of animation [4], creating natural and organic motion while keeping the actuator count low. Tega has five degrees of freedom: head up/down, waist-tilt left/right, waist-lean forward/back, full-body up/down, and full-body left/right. These joints are combinatorial and allow the robot to express behaviors consistently, rapidly, and reliably. The robot can run autonomously or can be remote-operated by a person through a teleoperation interface. It can operate on battery power for up to six hours before needing to be recharged, which allows for easier testing in the field; indeed, Tega was the robot platform used in a recent two-month study on second-language learning conducted in three public school classrooms [5], [6]. A variety of facial expressions and body motions can be triggered on the robot, such as laughter, excitement, and frustration.
Additional animations can be developed on a computer model of the robot and exported via a software pipeline to a set of motor commands that can be executed on the physical robot, thus enabling rapid development of new expressive behaviors. Speech can be played back from prerecorded audio tracks, generated on the fly with a text-to-speech system, or streamed to the robot via a real-time voice streaming and pitch-shifting interface. This video showcases the Tega robot's design and implementation. It is a first look at the robot's capabilities as a research platform. The video highlights the robot's motion, expressive capabilities, and its use in ongoing studies of child-robot interaction.
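The five degrees of freedom listed above suggest a simple pose representation that a motor-command pipeline could target. The sketch below is purely illustrative: the joint names and the normalized [-1, 1] value range are assumptions for this example, not Tega's actual API.

```python
# Illustrative sketch of a 5-DoF pose command for a robot like Tega,
# matching the degrees of freedom named in the text. Joint names and
# the normalized [-1, 1] range are assumptions, not the real interface.

TEGA_DOFS = ("head_updown", "waist_tilt", "waist_lean", "body_updown", "body_rotate")

def make_pose(**joints):
    """Return a full 5-DoF pose dict, defaulting unspecified joints to 0.0."""
    unknown = set(joints) - set(TEGA_DOFS)
    if unknown:
        raise ValueError(f"unknown joints: {sorted(unknown)}")
    for name, value in joints.items():
        if not -1.0 <= value <= 1.0:
            raise ValueError(f"{name} out of range [-1, 1]: {value}")
    return {dof: joints.get(dof, 0.0) for dof in TEGA_DOFS}

# An "excited" keyframe: head tilted up with a forward lean.
pose = make_pose(head_updown=0.4, waist_lean=0.7)
```

A sequence of such keyframes, exported from an animation tool and interpolated over time, is one plausible shape for the motor-command pipeline the abstract describes.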
Researchers in the cognitive and affective sciences investigate how thoughts and feelings are reflected in bodily response systems, including peripheral physiology, facial features, and body movements. One specific question along this line of research is how cognition and affect are manifested in the dynamics of general body movements. Progress in this area can be accelerated by inexpensive, non-intrusive, portable, scalable, and easy-to-calibrate movement tracking systems. Towards this end, this paper presents and validates Motion Tracker, a simple yet effective software program that uses established computer vision techniques to estimate the amount a person moves from a video of the person engaged in a task (available for download from http://jakory.com/motion-tracker/). The system works with any commercially available camera and with existing videos, thereby affording inexpensive, non-intrusive, and potentially portable and scalable estimation of body movement. Strong between-subject correlations were obtained between Motion Tracker's estimates of movement and body movements recorded from the seat (r = .720) and back (r = .695, for participants with higher back movement) of a chair affixed with pressure sensors while participants completed a 32-minute computerized task (Study 1). Within-subject cross-correlations were also strong for both the seat (r = .606) and back (r = .507). In Study 2, between-subject correlations between Motion Tracker's movement estimates and movements recorded from an accelerometer worn on the wrist were also strong (rs = .801, .679, and .681) while people performed three brief actions (e.g., waving). Finally, in Study 3, the within-subject cross-correlation was high (r = .855) when Motion Tracker's estimates were correlated with the movement of a person's head, as tracked with a Kinect while the person was seated at a desk. Best-practice recommendations, limitations, and planned extensions of the system are discussed.
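The abstract says Motion Tracker estimates movement from video using established computer vision techniques. A standard technique of that kind is frame differencing: score each frame by its mean absolute pixel difference from the previous frame. The sketch below is a generic reconstruction of that idea in NumPy, not Motion Tracker's actual code.

```python
import numpy as np

# Generic frame-differencing sketch of video-based motion estimation:
# the movement score for each frame transition is the mean absolute
# pixel difference from the previous frame. This illustrates the
# general technique, not Motion Tracker's implementation.

def movement_scores(frames):
    """frames: sequence of equal-sized 2-D grayscale arrays.
    Returns one movement score per frame transition (len(frames) - 1)."""
    frames = [np.asarray(f, dtype=np.float64) for f in frames]
    return [
        float(np.mean(np.abs(curr - prev)))
        for prev, curr in zip(frames, frames[1:])
    ]

# Two identical frames give zero movement; a changed pixel gives more.
still = np.zeros((4, 4))
moved = np.zeros((4, 4))
moved[1, 1] = 255.0
scores = movement_scores([still, still, moved])  # [0.0, 255/16]
```

Per-frame scores like these could then be smoothed or windowed into the coarser movement estimates that the paper correlates against pressure sensors, accelerometers, and the Kinect.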