Although the concept of industrial cobots dates back to 1999, most present-day hybrid human-machine assembly systems are merely weight compensators. Here, we present results on the development of a collaborative human-robot manufacturing cell for homokinetic joint assembly. The robot alternates between active and passive behaviours during assembly: active, to lighten the burden on the operator, and passive, to comply with the operator's needs. Our approach successfully manages direct physical contact between robot and human, and between robot and environment. Furthermore, it can be applied to standard position-controlled (rather than torque-controlled) robots, which are common in industry. The approach is validated in a series of assembly experiments. The human workload is reduced, diminishing the risk of strain injuries. Moreover, a complete risk analysis indicates that the proposed setup is compatible with the safety standards and could be certified.
In the factories of the future, ensuring productive and safe interaction between robot and human coworkers requires the robot to extract the essential information about its coworker. We address this by designing a reliable framework for real-time safe human-robot collaboration, using static hand gestures and 3D skeleton extraction. The OpenPose library is integrated with the Microsoft Kinect V2 to obtain a 3D estimate of the human skeleton. With the help of 10 volunteers, we recorded an image dataset of alphanumeric static hand gestures taken from American Sign Language. We named our dataset OpenSign and released it to the community for benchmarking. Inception V3 convolutional
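A common way to combine 2D keypoints (such as those produced by OpenPose) with a depth camera (such as the Kinect V2) is to back-project each keypoint through the pinhole camera model using the depth measured at that pixel. The abstract does not detail the fusion method, so the following is only a minimal sketch of that standard back-projection step; the intrinsic parameters below are illustrative placeholders, not calibrated Kinect V2 values.

```python
import numpy as np

def backproject(u, v, depth, fx, fy, cx, cy):
    """Back-project a 2D pixel keypoint (u, v) with depth z (metres)
    to 3D camera coordinates, using pinhole intrinsics (fx, fy, cx, cy)."""
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.array([x, y, z])

# Illustrative intrinsics (placeholders, not calibrated values)
fx, fy, cx, cy = 365.0, 365.0, 256.0, 212.0

# A hypothetical 2D wrist keypoint from OpenPose, plus the depth
# read from the Kinect depth map at that pixel
p3d = backproject(300.0, 200.0, 1.2, fx, fy, cx, cy)
```

Applying this to every detected joint yields the 3D skeleton used for monitoring the coworker.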
Humans use contacts in the environment to modify the shape of deformable objects. Yet, few papers have studied the use of contacts in robotic manipulation. In this paper, we investigate the problem of robotic manipulation of cables with environmental contacts. Instead of avoiding contacts, we propose a framework that allows the robot to exploit them for shaping the cable. We introduce an index to quantify the contact mobility of a cable with a circular contact. Based on this index, we present a planner for robot motions, aided by a vision-based contact detector. The framework is validated with robot experiments on different desired cable configurations.
For intuitive human-robot collaboration, the robot must quickly adapt to human behavior. To this end, we propose a multimodal sensor-based control framework, enabling a robot to recognize human intention and consequently adapt its control strategy. Our approach is marker-less, relies on a Kinect and an on-board camera, and is based on a unified task formalism. Moreover, we validate it in a mock-up industrial scenario, where human and robot must collaborate to insert screws in a flank.
In human-robot interaction, the robot controller must reactively adapt to sudden changes in the environment (due to unpredictable human behaviour). This often requires operating in different modes, and managing sudden signal changes from heterogeneous sensor data. In this paper, we present a multimodal sensor-based controller, enabling a robot to adapt to changes in the sensor signals (here, changes in the human collaborator's behaviour). Our controller is based on a unified task formalism and, in contrast with classical hybrid vision-force-position control, it enables smooth transitions and weighted combinations of the sensor tasks. The approach is validated in a mock-up industrial scenario, where pose, vision (from both a traditional camera and a Kinect), and force tasks must be realized either exclusively or simultaneously, for human-robot collaboration.
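The abstract mentions smooth transitions and weighted combinations of sensor tasks but does not give the controller equations, so the following is only an illustrative sketch of the general idea: each sensor task produces its own joint-velocity command, and the controller blends them with time-varying weights that ramp smoothly (here via a smoothstep profile) instead of switching abruptly. All function names and numbers are hypothetical.

```python
import numpy as np

def smooth_weight(t, t0, duration):
    """Smoothstep weight rising from 0 to 1 as t goes from t0 to t0 + duration.
    C1-continuous, so the blended command has no velocity jumps."""
    x = np.clip((t - t0) / duration, 0.0, 1.0)
    return 3 * x**2 - 2 * x**3

def blended_command(task_commands, weights):
    """Weighted combination of per-task joint-velocity commands."""
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()  # normalize so the weights sum to 1
    return sum(wi * np.asarray(q) for wi, q in zip(w, task_commands))

# Hypothetical velocity commands from three sensor tasks (pose, vision, force)
q_pose  = np.array([0.1, 0.0, -0.2])
q_vis   = np.array([0.0, 0.3,  0.1])
q_force = np.array([-0.1, 0.1, 0.0])

# Halfway through a 1-second transition from the pose task to the vision task
t = 0.5
w_vis = smooth_weight(t, 0.0, 1.0)
cmd = blended_command([q_pose, q_vis, q_force], [1.0 - w_vis, w_vis, 0.0])
```

At the transition midpoint the pose and vision commands are averaged; the force task can be faded in the same way when contact is expected.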
This paper introduces BAZAR, a collaborative robot that integrates the most advanced sensing and actuating devices in a unique system designed for Industry 4.0. We present BAZAR's three main features, all paramount in the factory of the future: mobility, for navigating dynamic environments; interaction, for operating side-by-side with human workers; and dual-arm manipulation, for transporting and assembling bulky objects. Keywords: Efficient, flexible and modular production • Robotics • Smart Assembly • Human-robot co-working • Real industrial world case studies • Digital Manufacturing and Assembly System • Machine Learning.
• We present a unique framework for manipulating both rigid and deformable objects.
• Our framework is model-free and requires a short initialization phase.
• Our framework does not require camera calibration, and works with different camera poses.