Abstract: Physical contact inevitably occurs when a robot interacts with its environment. A robot should be able to detect and distinguish whether a physical interaction between a human and the robot is an intended contact or a collision, so as to ensure human safety and improve interaction performance. In this paper, a virtual sensor that can detect and distinguish contact and collision between humans and industrial robots is proposed. Based on the generalized momentum of the robot, two observers with low-pass and ba…
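The two-observer idea described in this abstract — a low-pass channel that catches slow, intended contact and a band-pass channel that catches fast collisions — can be sketched with simple first-order IIR filters on the disturbance residual. Everything below (filter gains, thresholds, the `classify` helper) is illustrative, not taken from the paper:

```python
def low_pass(x, alpha):
    """First-order IIR low-pass: y[k] = alpha*x[k] + (1-alpha)*y[k-1]."""
    y, out = 0.0, []
    for v in x:
        y = alpha * v + (1 - alpha) * y
        out.append(y)
    return out

def band_pass(x, alpha_lo, alpha_hi):
    """Crude band-pass built as the difference of two low-pass filters:
    the fast filter passes low+mid frequencies, the slow one only low."""
    lo = low_pass(x, alpha_lo)
    hi = low_pass(x, alpha_hi)
    return [h - l for h, l in zip(hi, lo)]

def classify(residual, th_contact=0.5, th_collision=0.5):
    """Label each sample of a disturbance residual: a sustained slow push
    trips the low-pass channel ("contact"), a fast transient trips the
    band-pass channel ("collision")."""
    lp = low_pass(residual, alpha=0.05)
    bp = band_pass(residual, alpha_lo=0.05, alpha_hi=0.5)
    labels = []
    for l, b in zip(lp, bp):
        if abs(b) > th_collision:
            labels.append("collision")
        elif abs(l) > th_contact:
            labels.append("contact")
        else:
            labels.append("free")
    return labels
```

Note that a step input also has high-frequency content, so the band-pass channel briefly fires at the onset of a slow push; in practice the two channels are combined with per-channel thresholds tuned on the real residual signal.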
“…The authors use filters of current signals (CF) to identify the type of collision along with statistically determined thresholds for the processed torque signal. While in the work of [29] a method based on low-pass and band-pass filtering was proposed.…”
Due to the epidemic threat, more and more companies are deciding to automate their production lines. Given the lack of adequate safety measures or space, such companies in most cases cannot use classic industrial robots. The solution to this problem is the use of collaborative robots (cobots). However, the required equipment (force sensors) or alternative methods of detecting a threat to humans are usually quite expensive. The article presents the practical aspect of collision detection using a simple neural architecture. A virtual force and torque sensor, implemented as a neural network, may be useful in a team of collaborative robots. Four different approaches are compared in this article: auto-regressive (AR), recurrent neural network (RNN), convolutional long short-term memory (CNN-LSTM), and mixed convolutional LSTM network (MC-LSTM). These architectures are analyzed at different levels of input regression (motor current, position, speed, control velocity). The sensor was tested on the original CURA6 robot prototype (Cooperative Universal Robotic Assistant 6) by Intema. The test results indicate that the MC-LSTM architecture is the most effective with the regression level set at 12 samples (at 24 Hz). The mean absolute prediction error obtained by the MC-LSTM architecture was approximately 22 Nm. An external test (72 different signals with collisions) shows that the presented architecture can be used as a collision detector. The MC-LSTM collision-detection F1 score with the optimal threshold was 0.85. A well-developed virtual sensor based on such a network can be used to detect various types of collisions of a cobot or of other mobile or stationary systems operating on the basis of human-machine interaction.
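The detection scheme this abstract describes — predict the joint torque from a short regression window, flag a collision when the prediction error exceeds a threshold (the paper reports ≈22 Nm mean absolute error), and score the detector with F1 — can be sketched as follows. A window-mean predictor stands in for the trained MC-LSTM network; `predict_torque`, the threshold placement, and the toy signal are illustrative assumptions:

```python
def predict_torque(window):
    """Stand-in for the learned virtual sensor: here simply the window
    mean. (The paper's MC-LSTM is a trained network; this is a
    placeholder with the same interface.)"""
    return sum(window) / len(window)

def detect_collisions(torque, window=12, threshold=22.0):
    """Flag sample k as a collision when the measured torque deviates
    from the virtual sensor's prediction by more than the threshold."""
    flags = [False] * len(torque)
    for k in range(window, len(torque)):
        pred = predict_torque(torque[k - window:k])
        flags[k] = abs(torque[k] - pred) > threshold
    return flags

def f1_score(pred, truth):
    """F1 from per-sample boolean predictions vs. ground-truth labels."""
    tp = sum(p and t for p, t in zip(pred, truth))
    fp = sum(p and not t for p, t in zip(pred, truth))
    fn = sum(t and not p for p, t in zip(pred, truth))
    return 2 * tp / (2 * tp + fp + fn) if tp else 0.0
```

The 12-sample window mirrors the regression level the paper found optimal; swapping `predict_torque` for a trained recurrent model leaves the detection and scoring logic unchanged.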
“…If that signal comes from a low-band filter, the contact is desired. On the contrary, if the signal comes from a band-pass or a high-pass filter, the contact is a non-desired collision [82], [83].…”
The demand for collaborative robots is growing in industrial environments due to their versatility and low prices, and more collaborative solutions are emerging for industrial scenarios. However, implementing scenarios where robots work autonomously while safely synchronizing their operations with shop-floor workers is not easy. To fill the existing gap in the safe implementation of industrial collaborative scenarios, this manuscript presents a review built around five identified challenges that gathers the vital aspects to bear in mind while developing applications for them. A four-level classification is proposed, which collects the identified challenges and previous developments in the field of human-robot interaction. The five identified challenges aim to be the missing enabling keys for implementing industrial collaborative scenarios in modern industrial plants. Lastly, a discussion and conclusion are presented to analyze the degree of development in the field and its potential growth.
“…However, their approach is limited to cases when the dynamics of collision is significantly faster than the dynamics of the task, moreover, band-pass filtering modifies estimated disturbance torques making them impossible to use, for example, in collision localization. Haddadin et al (2008) and Li et al (2019) proposed to use two observers, one to detect slow or soft collisions using low-pass filtered collision torques and one to detect fast collisions using band-pass filtered collision torques. Sotoudehnejad et al (2012) proposed using time-variant thresholds that take into account uncertainties in inertial parameters of the robot as well as friction parameters.…”
Section: Collision Detection (mentioning; confidence: 99%)
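The time-variant thresholds mentioned in the quotation above — thresholds that widen where model uncertainty in friction and inertial parameters grows with joint velocity and acceleration — can be sketched as follows. The linear form and all gains are illustrative assumptions, not values from the cited paper:

```python
def time_variant_threshold(qd, qdd, base=2.0, kv=0.5, ka=0.2):
    """Detection threshold that grows with joint speed and acceleration,
    where friction and inertia model errors are largest (in the spirit of
    Sotoudehnejad et al., 2012). `base`, `kv`, `ka` are illustrative
    gains, not identified from any robot."""
    return base + kv * abs(qd) + ka * abs(qdd)

def detect(residual, qd, qdd, **kw):
    """Per-sample collision flag: residual exceeds the state-dependent
    threshold for that sample's velocity/acceleration."""
    return [abs(r) > time_variant_threshold(v, a, **kw)
            for r, v, a in zip(residual, qd, qdd)]
```

The design trade-off: a fixed threshold must cover the worst-case model error over all motions, whereas a state-dependent one can stay tight at standstill and relax only during fast motion.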
“…Theoretically, thresholds can be significantly reduced for fast collisions—collisions that have a distinguishingly higher frequency than the motion of robot—by band-pass filtering the dynamics of the robot (Ho and Song, 2013; Li et al, 2019). However, in our experiments by applying band-pass filter we were not able to reduce thresholds and to obtain faster collision detection time without getting false positives.…”
Recently, with the increasing number of robots entering numerous manufacturing fields, a considerable body of literature has appeared on physical human-robot interaction using data from proprioceptive sensors (motor- and/or load-side encoders). Most of these studies, however, take an accurate dynamic model of the robot for granted; in practice, model identification and observer design precede collision detection. To the best of our knowledge, no previous study has systematically investigated each aspect underlying physical human-robot interaction and the relationships between those aspects. In this paper, we bridge this gap, first by reviewing the literature on model identification, disturbance estimation, and collision detection and discussing the relationship between the three, then by examining the practical sides of model-based collision detection in a case study conducted on a UR10e. We show that the model identification step is critical for accurate collision detection, while the choice of the observer should be based mostly on computation time and the simplicity and flexibility of tuning. It is hoped that this study can serve as a roadmap for equipping industrial robots with basic physical human-robot interaction capabilities.
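The generalized-momentum disturbance observer that this line of work builds on can be sketched in discrete time for a single joint. With momentum p = M·q̇ and h collecting the Coriolis/gravity terms, the residual r is a first-order low-pass estimate of the external torque with bandwidth K; the scalar inertia, the gains, and the Euler integration below are an illustrative simplification:

```python
def momentum_observer(tau, qd, M, h, K=20.0, dt=0.01):
    """Discrete generalized-momentum observer (single joint, Euler).
    tau: commanded joint torque per sample; qd: measured joint velocity;
    M: constant joint inertia; h: Coriolis/gravity term per sample.
    Returns the residual r per sample, which tracks the external torque
    through first-order dynamics with cutoff K [rad/s]."""
    integ = 0.0          # running integral of (tau + h + r)
    r = 0.0              # observer residual
    p0 = M * qd[0]       # initial generalized momentum
    out = []
    for k in range(len(tau)):
        integ += (tau[k] + h[k] + r) * dt
        p = M * qd[k]                 # current generalized momentum
        r = K * (p - p0 - integ)      # residual -> external torque
        out.append(r)
    return out
```

In a constant-external-torque scenario the residual settles to the applied torque within a few observer time constants (1/K), which is why the choice between observers in the reviewed papers hinges more on filtering and tuning than on steady-state accuracy.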