This paper describes an extended (6-session) interaction between an ethnically and geographically diverse group of 26 first-grade children and the DragonBot robot in the context of learning about healthy food choices. We find that children demonstrate a high level of enjoyment when interacting with the robot, and a statistically significant increase in engagement with the system over the duration of the interaction. We also find evidence of relationship-building between the child and robot, and encouraging trends towards child learning. These results are promising for the use of socially assistive robotic technologies for long-term one-on-one educational interventions for younger children.
Object handover is a basic but essential capability for robots interacting with humans in many applications, e.g., caring for the elderly and assisting workers in manufacturing workshops. It appears deceptively simple, as humans perform object handover almost flawlessly. The success of humans, however, belies the complexity of object handover as a collaborative physical interaction between two agents with limited communication. This paper presents a learning algorithm for dynamic object handover, for example, when a robot hands over water bottles to marathon runners passing by the water station. We formulate the problem as contextual policy search, in which the robot learns object handover by interacting with the human. A key challenge here is to learn the latent reward of the handover task under noisy human feedback. Preliminary experiments show that the robot learns to hand over a water bottle naturally and that it adapts to the dynamics of human motion. One challenge for the future is to combine the model-free learning algorithm with a model-based planning approach and enable the robot to adapt to human preferences and object characteristics, such as shape, weight, and surface texture.
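The contextual policy search formulation can be illustrated with a minimal sketch. Everything here is assumed for illustration and is not the paper's algorithm: the context is the runner's approach speed, the action is a release-timing offset, the latent reward is a Gaussian bump around a speed-dependent ideal offset, the human gives only noisy binary feedback, and the learner is a simple reward-weighted regression over a linear-Gaussian policy.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy setup: context s = runner's approach speed (m/s),
# action a = release-timing offset (s). The latent reward peaks when a
# matches a speed-dependent ideal, but the robot only observes noisy
# binary success/failure feedback from the human.
IDEAL_SLOPE = 0.1  # ground truth, unknown to the learner

def noisy_feedback(speed, offset):
    p_success = np.exp(-10.0 * (offset - IDEAL_SLOPE * speed) ** 2)
    return float(rng.random() < p_success)

# Contextual linear-Gaussian policy: a ~ N(w0 + w1 * s, sigma^2).
w = np.zeros(2)
sigma = 0.2

for _ in range(300):
    speeds = rng.uniform(2.0, 6.0, size=20)
    actions = w[0] + w[1] * speeds + sigma * rng.standard_normal(20)
    rewards = np.array([noisy_feedback(s, a) for s, a in zip(speeds, actions)])
    if rewards.sum() < 2:
        continue  # too few successes to refit the policy
    # Reward-weighted regression: pull the policy mean toward actions
    # that earned positive human feedback.
    X = np.column_stack([np.ones_like(speeds), speeds])
    w = np.linalg.solve(X.T @ (rewards[:, None] * X) + 1e-6 * np.eye(2),
                        X.T @ (rewards * actions))
    sigma = max(0.1, 0.98 * sigma)  # gently anneal exploration

# The policy mean now tracks the speed-dependent ideal offset,
# e.g. w[0] + w[1] * 4.0 should be close to 0.4.
```

The point of the sketch is the structure, not the particular learner: the policy conditions on an observable context, and the latent reward is never seen directly, only through stochastic binary feedback.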
We present CoverNet, a new method for multimodal, probabilistic trajectory prediction in urban driving scenarios. Previous work has employed a variety of methods, including multimodal regression, occupancy maps, and 1-step stochastic policies. We instead frame the trajectory prediction problem as classification over a diverse set of trajectories. The size of this set remains manageable because there are a limited number of distinct actions that can be taken over a reasonable prediction horizon. We structure the trajectory set to a) ensure a desired level of coverage of the state space, and b) eliminate physically impossible trajectories. By dynamically generating trajectory sets based on the agent's current state, we can further improve the efficiency of our method. We demonstrate our approach on public, real-world self-driving datasets, and show that it outperforms state-of-the-art methods.
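The classification-over-a-trajectory-set idea can be sketched as follows. This is an illustrative toy, not the paper's construction: the trajectory bank, the max-pointwise distance, and the greedy ε-cover are all assumptions, and the "classifier" is replaced by a nearest-trajectory score so the example stays self-contained.

```python
import numpy as np

# Trajectories are (T, 2) arrays of x-y positions sampled at fixed dt.
def traj_distance(a, b):
    # Max pointwise Euclidean distance between two trajectories.
    return np.max(np.linalg.norm(a - b, axis=1))

def greedy_cover(bank, eps):
    """Greedily pick trajectories until every bank member is within eps
    of some chosen trajectory (an illustrative epsilon-cover)."""
    cover = []
    uncovered = list(range(len(bank)))
    while uncovered:
        best, best_hits = None, []
        for i in uncovered:
            hits = [j for j in uncovered
                    if traj_distance(bank[i], bank[j]) <= eps]
            if len(hits) > len(best_hits):
                best, best_hits = i, hits
        cover.append(bank[best])
        uncovered = [j for j in uncovered if j not in best_hits]
    return cover

# Bank of physically plausible candidates: constant-velocity rollouts
# fanned over headings and speeds (a stand-in for a dynamically
# generated, state-conditioned trajectory set).
T, dt = 12, 0.5
steps = np.arange(1, T + 1) * dt
bank = [np.outer(steps, v * np.array([np.cos(h), np.sin(h)]))
        for h in np.linspace(-np.pi / 4, np.pi / 4, 25)
        for v in np.linspace(2.0, 10.0, 9)]

trajectory_set = greedy_cover(bank, eps=2.0)  # far fewer modes than the bank

# "Classification" step: score each mode against a simulated ground-truth
# future and pick the closest; a learned network would output these scores.
truth = np.outer(steps, 6.0 * np.array([np.cos(0.1), np.sin(0.1)]))
scores = np.array([-traj_distance(m, truth) for m in trajectory_set])
best_mode = trajectory_set[np.argmax(scores)]
```

The ε parameter makes the coverage/size trade-off explicit: a smaller ε gives a finer set of modes (and more classes), while a larger ε keeps the classification head small.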
One of the major challenges that autonomous vehicles (AVs) face in an urban setting is communicating with other road users such as pedestrians. In this work, we investigated which expressive behaviors AVs can be endowed with so that pedestrians readily recognize the underlying intent of the vehicles' movements. The purpose of our study was to test the impact of expressive stopping behaviors on pedestrians' decision to cross a road. We utilized a virtual reality (VR) environment in which participants had to cross a street in the presence of an oncoming vehicle that may or may not stop. Next, we crafted several expressive AV behaviors conveying the vehicle's intention to stop for the pedestrian. Then, for each expressive design, we recorded how quickly a pedestrian determined that it was safe to cross the street. We also administered repeated surveys of their subjective experiences. Our findings suggest that expressive behaviors such as easing into a full stop or stopping farther away can help pedestrians make quicker decisions to cross the road. Additionally, stopping farther away from the pedestrian also resulted in higher subjective ratings of sense of safety, confidence, and intention understanding. We propose further investigation into expressive behaviors such as easing into a stop and stopping farther away to convey yielding intentions to pedestrians in future work. As a contribution to the field, all VR files used in this research are being open sourced at https://nureality.org.