Abstract: Human-robot object handovers have been an actively studied area of robotics over the past decade; however, very few techniques and systems have addressed the challenge of handing over diverse objects with arbitrary appearance, size, shape, and rigidity. In this paper, we present a vision-based system that enables reactive human-to-robot handovers of unknown objects. Our approach combines closed-loop motion planning with real-time, temporally-consistent grasp generation to ensure reactivity and motion smoothness.
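One common way to realize the temporal consistency the abstract mentions is to score each frame's grasp candidates against the previously selected grasp, so the robot's target does not jump between frames. The following is a hedged, minimal sketch of that idea; the scoring weights and the 3-D-point grasp representation are illustrative assumptions, not the authors' actual method.

```python
# Sketch: temporally-consistent grasp selection. Among the current frame's
# grasp candidates, prefer one that is both high quality and close to the
# previously chosen grasp, trading the two off with a weight.

def select_consistent_grasp(candidates, prev_grasp, w_dist=1.0):
    """candidates: list of (quality, (x, y, z)); returns the chosen (x, y, z)."""
    def cost(c):
        quality, pos = c
        dist = sum((p - q) ** 2 for p, q in zip(pos, prev_grasp)) ** 0.5
        return w_dist * dist - quality  # low distance and high quality win
    return min(candidates, key=cost)[1]

prev = (0.0, 0.0, 0.5)
cands = [(0.9, (0.4, 0.4, 0.5)),    # better quality, but far from prev
         (0.7, (0.05, 0.0, 0.5))]   # slightly worse, but nearby
print(select_consistent_grasp(cands, prev))  # picks the nearby grasp
```

Raising `w_dist` makes the selection stickier across frames at the cost of sometimes ignoring better grasps.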
“…Robots supporting people in their daily activities at home or at the workplace need to accurately and robustly perceive objects, such as containers, and their physical properties, for example when they are manipulated by a person prior to a human-to-robot handover [1,2,3,4,5]. Audio-visual perception should adapt, on the fly and with limited or no prior knowledge, to changing conditions in order to guarantee the correct execution of the task and the safety of the person.…”
Human-robot collaboration requires the contactless estimation of the physical properties of containers manipulated by a person, for example while pouring content into a cup or moving a food box. Acoustic and visual signals can be used to estimate the physical properties of such objects, which may vary substantially in shape, material, and size, and may also be occluded by the hands of the person. To facilitate comparisons and stimulate progress in solving this problem, we present the CORSMAL challenge and a dataset to assess the performance of algorithms through a set of well-defined performance scores. The tasks of the challenge are the estimation of the mass, capacity, and dimensions of the object (container), and the classification of the type and amount of its content. A novel feature of the challenge is our real-to-simulation framework for visualising and assessing the impact of estimation errors in human-to-robot handovers.
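Challenge scores for quantities such as mass, capacity, and dimensions are typically built from per-object relative errors aggregated over a dataset. The exact CORSMAL scoring functions are defined by the challenge itself; the sketch below only illustrates the general shape of such a metric, mapping relative error into [0, 1] and averaging.

```python
# Illustrative relative-error score: per object, 1 - |estimate - truth| / truth,
# clipped at zero, then averaged across the dataset. Not the official CORSMAL
# formula, just the common pattern such scores follow.

def relative_error_score(estimates, ground_truth):
    """Average of max(0, 1 - |e - g| / g) over all objects."""
    scores = []
    for e, g in zip(estimates, ground_truth):
        scores.append(max(0.0, 1.0 - abs(e - g) / g))
    return sum(scores) / len(scores)

# Example: two capacity estimates (mL) against ground truth
print(relative_error_score([480.0, 300.0], [500.0, 250.0]))  # -> 0.88
```

A score of 1.0 means perfect estimation; estimates off by 100% or more contribute zero rather than a negative penalty.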
“…The estimation of container properties such as width, height, and mass represents a crucial stage, since the robot regulates the force to hold the object during the handover and the maneuvering [2]. Moreover, it is not a trivial task, since the object could be unknown [3,4] or the physical properties of the container could change based on the interaction, e.g., deformation due to the grasp, or different stiffness and filling amounts [5].…”
In the research area of human-robot interaction, the automatic estimation of the mass of a container manipulated by a person using only visual information is a challenging task. The main challenges are occlusions, different filling materials, and varying lighting conditions. The mass of an object constitutes key information for the robot to correctly regulate the force required to grasp the container. We propose a single RGB-D camera-based method to locate a manipulated container and estimate its empty mass, i.e., independently of the presence of the content. The method first automatically selects a number of candidate containers based on their distance from the fixed frontal view, then averages the mass predictions of a lightweight model to provide the final estimation. Results on the CORSMAL Containers Manipulation dataset show that the proposed method estimates the empty container mass with a score of 71.08% under different lighting and filling conditions.
“…Robotic deliveries can either be performed indirectly by leaving the object in the vicinity of the human receiver [1,2,3] or directly by handing over the object to them. The majority of research in robotic handovers is focused on direct handovers using fixed-base manipulators [4,5]. On the other hand, mobile robots allow for more control over when and how to approach a human receiver for fetch-and-carry tasks.…”
Existing approaches to direct robot-to-human handovers are typically implemented on fixed-base robot arms, or on mobile manipulators that come to a full stop before performing the handover. We propose "on-the-go" handovers which permit a moving mobile manipulator to hand over an object to a human without stopping. The on-the-go handover motion is generated with a reactive controller that allows simultaneous control of the base and the arm. In a user study, human receivers subjectively assessed on-the-go handovers to be more efficient, predictable, natural, better timed and safer than handovers that implemented a "stop-and-deliver" behavior.
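The core idea of controlling base and arm simultaneously can be illustrated with a toy 1-D proportional controller: both subsystems reduce the same end-effector error at once, so the handover target is tracked without the base ever stopping. The gains and the 1-D state below are assumptions for illustration; the paper's reactive controller operates in full task space.

```python
# Minimal 1-D sketch of an "on-the-go" handover controller: one error signal
# drives both the (slow, large-workspace) base and the (fast, small-workspace)
# arm at the same time.

def reactive_step(base_x, arm_x, hand_x, dt=0.05, k_base=0.5, k_arm=2.0):
    """One control step: base and arm both move toward the receiver's hand."""
    ee_x = base_x + arm_x           # end-effector position in the world frame
    error = hand_x - ee_x
    base_x += k_base * error * dt   # base handles the gross approach
    arm_x += k_arm * error * dt     # arm makes the fast fine correction
    return base_x, arm_x

base, arm = 0.0, 0.3
for _ in range(200):
    base, arm = reactive_step(base, arm, hand_x=2.0)
print(round(base + arm, 3))  # end-effector has converged to the hand at x = 2.0
```

Splitting the gain between base and arm is what lets the handover happen mid-motion: the arm absorbs tracking error faster than the base alone could.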