Recent work in the domain of Human-Robot Motion (HRM) has attempted to plan collision avoidance behavior that accounts for cooperation between agents. Cooperative collision avoidance between humans and robots depends on several factors, such as speed, heading, and also human attention and intention. Based on some of these factors, people decide their crossing order during collision avoidance. However, when situations arise in which the choice of crossing order is not consistent between people, the robot must account for the possibility that both agents will assume the same role, i.e., a decision detrimental to collision avoidance. In our work we estimate the boundary that separates the decision to avoid collision as first or last crosser. Approximating the uncertainty around this boundary allows our collision avoidance strategy to address this problem, based on the insight that the robot should plan its motion in such a way that, even if both agents initially choose the same crossing order, they can unambiguously perceive their crossing order from their subsequent collision avoidance actions.
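The crossing-order boundary described above can be illustrated with a minimal sketch, assuming a logistic decision model; the features (time to collision, speed advantage) and weights below are hypothetical, not the values estimated in this work:

```python
import math

def crossing_order_probability(ttc, speed_diff, bias=0.0, w_ttc=-0.8, w_speed=1.5):
    """Illustrative logistic model: probability that an agent chooses to
    cross first, given time to collision (s) and its speed advantage over
    the other agent (m/s). All weights here are made-up assumptions."""
    z = bias + w_ttc * ttc + w_speed * speed_diff
    return 1.0 / (1.0 + math.exp(-z))

def is_ambiguous(p, margin=0.15):
    """The situation is ambiguous when p lies close to the 0.5 decision
    boundary, i.e. both agents might plausibly assume the same role."""
    return abs(p - 0.5) < margin
```

Under such a model, the robot would treat any encounter whose probability falls inside the ambiguity margin as one where its next avoidance action must make the crossing order unambiguous.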
This paper is about Human-Robot Motion (HRM), i.e., the study of how a robot should move among humans. This problem has often been solved by treating people as moving obstacles, predicting their future trajectories, and avoiding them. In contrast with such an approach, recent works have shown the benefits of robots that move and avoid collisions in a manner similar to people, which we call human-like motion. One such benefit is that human-like motion was shown to reduce the planning effort for all people in the environment, given that they tend to solve collision avoidance problems in similar ways. The effort required to avoid a collision, however, is not shared equally between agents: it varies depending on factors such as visibility and crossing order. Thus, this work tackles HRM using the notion of motion effort and how it should be shared between the robot and the person in order to avoid collisions. To that end, our approach uses Reinforcement Learning to learn a robot behavior that mutually solves the collision avoidance problem in our simulated trials.
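As a rough illustration of how effort sharing could enter an RL objective, the following sketch shows one possible per-step reward; the weights, safety distance, and effort terms are assumptions for illustration, not the reward actually used in the trials:

```python
def step_reward(min_distance, effort, other_effort,
                d_safe=0.5, w_col=10.0, w_eff=1.0, w_share=0.5):
    """Hypothetical reward shaping for the described RL setup: penalize
    near-collisions, penalize the robot's own motion effort, and penalize
    an unequal split of the total avoidance effort. All parameter values
    are made-up assumptions."""
    collision_pen = w_col if min_distance < d_safe else 0.0
    total = effort + other_effort
    share_pen = w_share * abs(effort - other_effort) / total if total > 0 else 0.0
    return -(collision_pen + w_eff * effort + share_pen)
```

The share term is what distinguishes this objective from plain collision avoidance: a policy that dumps all the effort onto the person is penalized even when no collision occurs.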
Recent works in the area of human-robot motion have shown that behaving in a human-like manner allows a robot to reduce the global cognitive effort of the people in its environment. Given that collision avoidance between people is solved cooperatively, this work models how that cooperation unfolds so that a robot can replicate it. To that end, hundreds of situations in which two walkers have crossing trajectories were analyzed. Based on these human trajectories involving a collision avoidance task, we determined how total effort is shared between the walkers depending on several factors of the interaction, such as crossing angle, time to collision, and speed. To validate our approach, a proof of concept is integrated into ROS with Reciprocal Velocity Obstacles (RVO) in order to distribute collision avoidance effort in a human-like way.
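The effort-distribution idea can be sketched minimally with a single scalar sharing parameter; classic reciprocal avoidance corresponds to an equal split (alpha = 0.5), whereas a human-like policy would set alpha from the interaction factors above. This interface is an assumption for illustration, not the model fitted from the recorded trajectories:

```python
def split_avoidance_effort(total_deviation, alpha=0.5):
    """Distribute the total velocity deviation needed to resolve a collision
    between walker A and walker B. alpha is the fraction borne by A; a
    human-like policy would derive alpha from factors such as crossing
    angle, time to collision, and speed (hypothetical interface)."""
    if not 0.0 <= alpha <= 1.0:
        raise ValueError("alpha must lie in [0, 1]")
    return alpha * total_deviation, (1.0 - alpha) * total_deviation
```

For example, `split_avoidance_effort(0.6, alpha=0.25)` assigns a quarter of the required deviation to A and the remaining three quarters to B.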