This paper presents a novel human-like learning controller for interacting with unknown environments. Strictly derived from the minimization of instability, motion error, and effort, the controller compensates for environmental disturbances in interaction tasks by adapting feedforward force and impedance. In contrast to conventional learning controllers, the new controller can deal with the unstable situations typical of tool use and gradually acquires a desired stability margin. Simulations show that this controller is a good model of human motor adaptation. Robotic implementations further demonstrate its capability to optimally adapt interaction with dynamic environments and humans on joint-torque-controlled robots and variable impedance actuators, without requiring interaction force sensing.
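The core idea can be caricatured in a one-degree-of-freedom simulation: feedforward force and stiffness are adapted from the tracking error, with a forgetting term so effort decays once the error vanishes. This is a minimal sketch, not the paper's actual adaptation law; all gains (`a_f`, `a_k`), the forgetting rate `g`, and the destabilizing environment model are hypothetical:

```python
import numpy as np

# 1-DoF sketch of error-driven adaptation of feedforward force and
# stiffness against a destabilizing environment. All gains (a_f, a_k),
# the forgetting rate g, and the environment stiffness are hypothetical.
dt, m, d = 0.001, 1.0, 5.0
k_env = 80.0                          # divergent environment force k_env * x
a_f, a_k, g = 40.0, 400.0, 0.2        # adaptation gains and forgetting rate
x, v, F_ff, K = 0.05, 0.0, 0.0, 0.0   # start displaced from target x* = 0

for _ in range(int(5.0 / dt)):
    e = x                              # tracking error w.r.t. target 0
    u = F_ff - K * x - d * v           # feedforward + adapted stiffness
    F_env = k_env * x                  # pushes the mass away from the target
    acc = (u + F_env) / m
    v += acc * dt
    x += v * dt
    # error-driven adaptation; forgetting reduces effort once e is small
    F_ff += dt * (-a_f * e - g * F_ff)
    K += dt * (a_k * e * e - g * K)

print(x, K)                            # final state of the adapted system
```

Without adaptation the environment force makes the closed loop exponentially divergent; the error-driven stiffness growth is what recovers a stability margin, while forgetting keeps the impedance from staying needlessly high.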
Enabling robots to safely interact with humans is an essential goal of robotics research. The developments achieved over recent years in mechanical design and control have made active cooperation between humans and robots possible in rather complex situations. For this, safe robot behavior even under worst-case conditions is crucial and also forms a basis for higher-level decisional aspects. To quantify what safe behavior really means, a definition of injury, as well as an understanding of its general dynamics, is essential. This insight can then be applied to design and control robots such that injury due to robot-human impacts is explicitly taken into account. In this paper we approach the problem from a medical injury analysis point of view in order to formulate the relation between robot mass, velocity, impact geometry, and the resulting injury qualified in medical terms. We transform these insights into processable representations and propose a motion supervisor that utilizes injury knowledge for generating safe robot motions. The algorithm takes into account the reflected inertia, velocity, and geometry at possible impact locations. The proposed framework forms a basis for generating truly safe velocity bounds that explicitly consider the dynamic properties of the manipulator and human injury.
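The supervisor idea can be sketched in a few lines: given the inertia reflected at a possible impact location, cap the commanded speed so that the impact energy stays below an injury-derived budget. The energy budget `E_max`, the toy task-space inertia, and all helper names below are placeholders for illustration, not the paper's actual curves or values:

```python
import numpy as np

# Toy velocity supervisor: bound the speed so the kinetic energy reflected
# at a possible impact location stays below an injury-derived budget.
def reflected_mass(Lambda_inv, u):
    # inertia perceived along unit direction u; Lambda_inv = J M(q)^-1 J^T
    return 1.0 / float(u @ Lambda_inv @ u)

def safe_speed(m_refl, E_max):
    # largest v with 0.5 * m_refl * v^2 <= E_max
    return np.sqrt(2.0 * E_max / m_refl)

def supervise(v_cmd, m_refl, E_max=1.0):
    # scale down only when the command would exceed the safe bound
    return min(v_cmd, safe_speed(m_refl, E_max))

Lambda_inv = np.diag([0.25, 0.5, 1.0])   # toy inverse task-space inertia
u = np.array([1.0, 0.0, 0.0])            # candidate impact direction
m_u = reflected_mass(Lambda_inv, u)      # -> 4.0 kg along u
print(supervise(2.0, m_u), supervise(0.3, m_u))
```

Because the reflected inertia depends on the joint configuration, the same commanded speed can be safe in one pose and capped in another, which is exactly the configuration-dependence the abstract's velocity bounds capture.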
In this paper we describe a system for aerial manipulation composed of a helicopter platform and a fully actuated seven degree-of-freedom (DoF) redundant industrial robotic arm. We present the first analysis of this kind of system and show that the dynamic coupling between the helicopter and the arm can generate slowly diverging oscillations, which we call phase circles. Based on this analysis, we propose a control approach for the whole system. Partial decoupling between the helicopter and the arm, which eliminates the phase circles, is achieved through special motions of the robotic arm that exploit its redundant DoF. For the underlying arm control, a specially designed impedance controller is used. In several flight experiments we show that the proposed type of system could be used in the future for practically relevant tasks. In an integrated experiment we demonstrate a basic manipulation task: impedance-based grasping of an object from the environment, guided by a visual object tracking control loop.
In this paper, we present an efficient 3D object recognition and pose estimation approach for grasping procedures in cluttered and occluded environments. In contrast to common appearance-based approaches, we rely solely on 3D geometry information. Our method is based on a robust geometric descriptor, a hashing technique, and an efficient, localized RANSAC-like sampling strategy. We assume that each object is represented by a model consisting of a set of points with corresponding surface normals. Our method simultaneously recognizes multiple model instances and estimates their poses in the scene. A variety of tests shows that the proposed method performs well on noisy, cluttered, and unsegmented range scans in which only small parts of the objects are visible. The main procedure of the algorithm has linear time complexity, resulting in a high recognition speed which allows direct integration of the method into a continuous manipulation task. The experimental validation with a 7-degree-of-freedom Cartesian impedance controlled robot shows how the method can be used for grasping objects from a complex random stack. This application demonstrates how the integration of computer vision and soft robotics leads to a robotic system capable of acting in unstructured and occluded environments.
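The recognition pipeline can be sketched as: hash oriented point pairs of the model by a pose-invariant descriptor offline, then sample scene pairs, look up candidate correspondences, and keep the rigid transform that explains the most model points. The descriptor, quantization, and scoring below are simplified stand-ins for the paper's actual components, tested on a synthetic scene:

```python
import numpy as np

rng = np.random.default_rng(0)

def normalize(v):
    return v / np.linalg.norm(v)

def pair_feature(p1, n1, p2, n2, q=0.1):
    # quantized 4D descriptor of an oriented point pair (the hash key)
    d = p2 - p1
    dist = np.linalg.norm(d)
    u = d / dist
    f = (dist, n1 @ u, n2 @ u, n1 @ n2)
    return tuple(int(round(x / q)) for x in f)

def pair_frame(p, n, p2):
    # orthonormal frame attached to an oriented point pair
    e1 = normalize(p2 - p)
    e2 = normalize(n - (n @ e1) * e1)
    return np.column_stack([e1, e2, np.cross(e1, e2)])

# synthetic model: random points with random unit normals
M = rng.uniform(-1.0, 1.0, size=(200, 3))
N = np.array([normalize(rng.normal(size=3)) for _ in range(200)])

# offline stage: hash every model point pair by its descriptor
table = {}
for i in range(len(M)):
    for j in range(len(M)):
        if i != j:
            table.setdefault(pair_feature(M[i], N[i], M[j], N[j]),
                             []).append((i, j))

# scene: the model under an unknown rigid motion (ground truth for testing)
th = 0.7
R_true = np.array([[np.cos(th), -np.sin(th), 0.0],
                   [np.sin(th),  np.cos(th), 0.0],
                   [0.0,         0.0,        1.0]])
t_true = np.array([0.5, -0.2, 1.0])
S, SN = M @ R_true.T + t_true, N @ R_true.T

best, best_score = None, -1.0
for _ in range(50):                      # RANSAC-like sampling of scene pairs
    i, j = rng.choice(len(S), size=2, replace=False)
    key = pair_feature(S[i], SN[i], S[j], SN[j])
    for a, b in table.get(key, []):      # candidate model pairs from the hash
        R = pair_frame(S[i], SN[i], S[j]) @ pair_frame(M[a], N[a], M[b]).T
        t = S[i] - R @ M[a]
        X = M @ R.T + t                  # hypothesis: model placed in scene
        d = np.min(np.linalg.norm(X[:, None] - S[None, :], axis=2), axis=1)
        score = np.mean(d < 0.05)        # fraction of explained model points
        if score > best_score:
            best, best_score = (R, t), score
```

Because the descriptor is invariant under rigid motions, a sampled scene pair retrieves its corresponding model pair from the hash table in constant time, which is what gives the method its linear-time flavor.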
Because bin-picking mirrors major challenges in robotics, it has been a relevant robotic showpiece application for several decades. This paper approaches the bin-picking problem with the latest state-of-the-art hardware components, namely an impedance controlled lightweight robot and a Time-of-Flight camera. Lightweight robots have gained new capabilities in both sensing and actuation without suffering a decrease in speed or payload. Time-of-Flight cameras are superior to common proximity sensors in that they provide depth and intensity images at video frame rate, independent of textures. Furthermore, the bin-picking process presented here incorporates an environment model and allows for physical human-robot interaction. The existing imprecision in Time-of-Flight camera measurements is compensated by the compliant behavior of the robot. A generic state machine monitors the entire bin-picking process. This paper describes the computer vision algorithms in combination with the sophisticated control schemes of the robot and demonstrates a reliable and robust solution to the chosen problem.
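A supervising state machine of this kind can be expressed as a small transition table. The states and events below are invented for illustration and are not the paper's actual design; the point is that failed grasps and an emptied bin are handled as ordinary transitions rather than errors:

```python
# Hypothetical skeleton of a bin-picking supervisor state machine.
# State and event names are illustrative only.
TRANSITIONS = {
    ("ACQUIRE_IMAGE", "object_found"):  "PLAN_GRASP",
    ("ACQUIRE_IMAGE", "bin_empty"):     "DONE",
    ("PLAN_GRASP",    "grasp_planned"): "APPROACH",
    ("PLAN_GRASP",    "no_grasp"):      "ACQUIRE_IMAGE",
    ("APPROACH",      "contact"):       "GRASP",   # compliance absorbs depth error
    ("GRASP",         "grasp_ok"):      "DEPOSIT",
    ("GRASP",         "grasp_failed"):  "ACQUIRE_IMAGE",
    ("DEPOSIT",       "placed"):        "ACQUIRE_IMAGE",
}

def step(state, event):
    # unknown events leave the state unchanged
    return TRANSITIONS.get((state, event), state)

state = "ACQUIRE_IMAGE"
for ev in ["object_found", "grasp_planned", "contact", "grasp_ok", "placed"]:
    state = step(state, ev)
```

After one successful pick-and-place cycle the machine is back in the image acquisition state, ready for the next part or for a `bin_empty` event that terminates the process.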
This paper describes a novel method for motion generation and reactive collision avoidance. The algorithm executes arbitrary desired velocity profiles in the absence of external disturbances and reacts to virtual or physical contact in a unified fashion, with clearly physically interpretable behavior. The method uses physical analogies to define attractor dynamics that generate smooth paths even in the presence of virtual and physical objects. Due to its low complexity, the proposed algorithm can run in the innermost control loop of the robot, which is absolutely crucial for safe human-robot interaction. The method is conceived as the locally reactive real-time motion generator connecting control, collision detection and reaction, and global path planning.
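The flavor of such attractor dynamics can be shown with a generic 2D potential-field sketch: an attractor pulls the robot toward a goal while a repulsive term deflects it around a virtual obstacle, all computed cheaply enough for an inner control loop. This is not the paper's specific formulation; the gains, radii, and the obstacle are illustrative:

```python
import numpy as np

# Generic 2D potential-field sketch: attractor dynamics pull the point
# robot to a goal; a repulsive term deflects it around a virtual obstacle.
goal = np.array([1.0, 0.0])
obs_c, obs_r = np.array([0.5, 0.06]), 0.1    # obstacle center and radius
k_att, d_damp = 4.0, 4.0                     # attractor stiffness / damping
k_rep, rho0 = 0.002, 0.15                    # repulsion gain, active range
dt = 0.002
x, v = np.zeros(2), np.zeros(2)
min_clear = np.inf                           # closest approach to the surface

for _ in range(6000):
    f = k_att * (goal - x) - d_damp * v      # damped attractor dynamics
    diff = x - obs_c
    rho = np.linalg.norm(diff) - obs_r       # distance to obstacle surface
    min_clear = min(min_clear, rho)
    if rho < rho0:                           # repel only near the obstacle
        f += k_rep * (1.0 / rho - 1.0 / rho0) / rho**2 * diff / (rho + obs_r)
    v += f * dt                              # integrate the point dynamics
    x += v * dt
```

Each control cycle costs a handful of arithmetic operations per obstacle, which is the property that lets this class of methods run at the servo rate rather than at planning rate.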
In this paper we present a novel control architecture for realizing human-friendly behaviors and intuitive state-based programming. The design implements strategies that take advantage of sophisticated soft-robotics features to provide reactive, robust, and safe robot actions in dynamic environments. Quick access to the various functionalities of the robot enables the user to develop flexible hybrid state automata for programming robot behaviors. The real-time robot control takes care of all safety-critical aspects and provides reactive reflexes that respond directly to external stimuli.