Human-robot collaboration is attracting growing interest in industrial settings, as collaborative robots are considered safe and robot actions can be programmed easily, for example through physical interaction. Despite this, robot programming still focuses mostly on automated robot motions; interactive tasks and coordination between human and robot require additional development. For example, the selection of which tasks or actions a robot should do next might not be known beforehand or might change at the last moment. Within a human-robot collaborative setting, the coordination of complex shared tasks is therefore better suited to a human, with the robot acting upon requested commands. In this work we explore the use of commands to coordinate a shared task between a human and a robot in a shared workspace. Based on a known set of higher-level actions (e.g., pick-and-place, hand-over, kitting) and the commands that trigger them, both a speech-based and a graphical command-based interface are developed to investigate their use. While speech-based interaction might be more intuitive for coordination, background sounds and noise in industrial settings might hinder its capabilities. The graphical command-based interface circumvents this while still demonstrating the capabilities of coordination. The developed architecture follows a knowledge-based approach, in which the actions available to the robot are checked at runtime to determine whether they suit the task and the current state of the world. Experimental results on industrially relevant assembly, kitting and hand-over tasks in a laboratory setting demonstrate that graphical command-based and speech-based coordination with high-level commands is effective for collaboration between a human and a robot.
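As an illustration of the knowledge-based runtime check described above, the following minimal Python sketch maps high-level commands to actions and verifies their preconditions against a world state. The action names, precondition sets and world-state representation are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of a knowledge-based command dispatcher. Action names,
# precondition sets and the world-state representation are illustrative
# assumptions, not the authors' implementation.
from dataclasses import dataclass, field

@dataclass
class Action:
    name: str
    preconditions: set = field(default_factory=set)  # facts that must hold

ACTIONS = {
    "pick_and_place": Action("pick_and_place", {"part_on_table", "gripper_empty"}),
    "hand_over":      Action("hand_over",      {"part_in_gripper", "human_present"}),
    "kitting":        Action("kitting",        {"parts_on_table", "kit_tray_empty"}),
}

def dispatch(command: str, world: set) -> str:
    """Map a high-level command to an action and check it against the world."""
    action = ACTIONS.get(command)
    if action is None:
        return f"unknown command: {command}"
    missing = action.preconditions - world
    if missing:
        return f"cannot execute {command}: missing facts {sorted(missing)}"
    return f"executing {command}"

world_state = {"part_on_table", "gripper_empty", "human_present"}
print(dispatch("pick_and_place", world_state))  # executing pick_and_place
print(dispatch("hand_over", world_state))       # missing 'part_in_gripper'
```

Whether a command is spoken or triggered from the graphical interface, the same check against the world state decides if the requested action can run.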
Robotic systems developed for support can provide assistance in various ways. Regardless of the service provided, however, the quality of user interaction is key to adoption by the general public. Simple communication difficulties, such as terminological differences, can make or break the acceptance of robots. In this work we take these communication difficulties between a human and a robot into account. We propose a system that handles unknown concepts through symbol manipulation based on natural language interactions. In addition, ontologies are used as a convenient way to store the knowledge and reason about it. To demonstrate the use of our system, two scenarios are described and tested with a Care-O-Bot 4. The experiments show that confusion and difficulties in communication can be resolved effectively through symbol manipulation.
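The sketch below illustrates, under assumptions, how an unknown term could be resolved against a small ontology through a clarification dialogue and stored as a new symbol. The ontology contents and dialogue policy are hypothetical, not the system described in the abstract.

```python
# Illustrative sketch: resolve an unknown word against a small ontology via a
# clarification dialogue and remember it as a synonym (symbol manipulation).
# The ontology contents and dialogue policy are hypothetical.
ontology = {
    "cup":    {"is_a": "container", "graspable": True},
    "bottle": {"is_a": "container", "graspable": True},
}
synonyms = {}  # learned aliases: unknown word -> known concept

def resolve(term):
    """Return a known concept for `term`, learning a new alias if needed."""
    if term in ontology:
        return term
    if term in synonyms:
        return synonyms[term]
    # Stand-in for the natural-language clarification dialogue.
    answer = input(f"I don't know '{term}'. Which known object is it like? ")
    if answer in ontology:
        synonyms[term] = answer  # store the new symbol as an alias
        return answer
    return None

concept = resolve("mug")  # the user might answer 'cup'
if concept is not None:
    print(f"Treating it as '{concept}':", ontology[concept])
```

Once the alias is stored, later uses of the same word resolve without asking again, which is how a terminological mismatch stops recurring.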
Robotic object grasping and handling requires accurate grasp pose estimation and gripper/end-effector design tailored to individual objects. When object shape is unknown, cannot be estimated, or is highly complex, parallel grippers can provide insufficient grip. Compliant grippers can circumvent these issues through the use of soft or flexible materials that adapt to the shape of the object. This paper proposes a 3D-printable soft gripper design for handling complex shapes. The compliant properties of the gripper enable contour conformation, yet offer tunable mechanical properties (i.e., directional stiffness). Objects with complex shapes, such as non-constant curvature or a combination of convex and concave surfaces, can be grasped blind (i.e., without grasp pose estimation). The motivation behind the gripper design is the handling of industrial parts, such as jet and diesel engine components. (Dis)assembly, cleaning and inspection of such engines is a complex, manual task that can benefit from (semi-)automated robotic handling. The complex shape of each component, however, limits where and how it can be grasped. The proposed soft gripper design is tuned by stacks of compliant cells that deform to the shape of the handled object. Individual compliant cells and cell stacks are characterized, and a detailed experimental analysis of more than 600 grasps with seven different industrial parts evaluates the approach.
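To make the stiffness-tuning idea concrete, a back-of-the-envelope model: if each compliant cell is approximated as a linear spring, cells stacked in series combine as 1/k_total = Σ 1/k_i, so adding cells softens the stack while swapping in stiffer cells firms it up. This spring-in-series approximation and the numerical values below are assumptions for illustration only; the paper characterizes the cells experimentally.

```python
# Spring-in-series approximation of a compliant cell stack. The linear-spring
# assumption and the stiffness values are for illustration only.
def stack_stiffness(cell_stiffnesses):
    """Effective stiffness of cells in series: 1/k_total = sum(1/k_i)."""
    return 1.0 / sum(1.0 / k for k in cell_stiffnesses)

soft = stack_stiffness([2.0, 2.0, 2.0])  # three soft cells (N/mm) -> ~0.67 N/mm
stiff = stack_stiffness([8.0, 8.0])      # two stiffer cells (N/mm) -> 4.00 N/mm
print(f"soft stack: {soft:.2f} N/mm, stiff stack: {stiff:.2f} N/mm")
```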
Recent advances in robotics allow humans and machines to collaborate on tasks at home or in industrial settings without harming the life of the user. While humans can easily adapt to each other and work in teams, this is not as trivial for robots. In their case, interaction skills typically come at the cost of extensive programming and teaching. Moreover, understanding the semantics of a task is necessary to work efficiently and to react to changes during task execution. As a result, achieving seamless collaboration requires appropriate reasoning, learning skills and interaction capabilities. For us humans, a cornerstone of communication is language, which we use to teach, coordinate and communicate. In this paper we therefore propose a system that allows (i) teaching new action semantics based on already available knowledge and (ii) using natural language communication to resolve ambiguities that could arise while giving commands to the robot. Reasoning then allows new skills to be performed either autonomously or in collaboration with a human. Teaching occurs through a web application, and motions are learned through physical demonstration with the robotic arm. We demonstrate the utility of our system in two scenarios and reflect upon the challenges it introduces.
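A conceptual sketch of the two capabilities, (i) teaching new action semantics from known skills and (ii) resolving ambiguous commands through dialogue, is given below. The skill names, primitives and the matching rule are illustrative assumptions, not the system's actual interface.

```python
# Conceptual sketch: (i) compose a new skill from known ones, (ii) ask a
# clarification question when a command is ambiguous. Names are assumptions.
PRIMITIVES = {"open_gripper", "close_gripper", "move_to_object", "move_to_target"}
skills = {
    "pick":  ["open_gripper", "move_to_object", "close_gripper"],
    "place": ["move_to_target", "open_gripper"],
}

def teach(name, steps):
    """Teach a new action in terms of already-known skills or primitives."""
    unknown = [s for s in steps if s not in skills and s not in PRIMITIVES]
    if unknown:
        raise ValueError(f"cannot teach '{name}': unknown steps {unknown}")
    skills[name] = steps

def interpret(command):
    """Resolve a command to a skill; ask for clarification if ambiguous."""
    matches = [name for name in skills if name.startswith(command)]
    if len(matches) == 1:
        return matches[0]
    if matches:  # ambiguous: fall back to a clarification question
        choice = input(f"Did you mean one of {matches}? ")
        return choice if choice in matches else None
    return None

teach("pick_and_place", ["pick", "place"])  # new semantics from known actions
print(interpret("pick"))  # ambiguous between 'pick' and 'pick_and_place'
```

Grounding new skills only in steps the robot already knows is what lets reasoning execute them later, autonomously or together with a human.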