2021
DOI: 10.1007/s40313-021-00829-3
A New Mechanism for Collision Detection in Human–Robot Collaboration using Deep Learning Techniques

Cited by 5 publications (3 citation statements)
References 21 publications
“…Yet, these models were not deployed for human-robot task completion. Further developments include the work by Rodrigues et al. [32], which presents a CNN model for detecting collisions between users and cobots with an accuracy of 89%, and by Morrison et al. [51], which develops a generative grasping CNN-based model achieving between 88% and 94% accuracy for grasping static and moving objects. Additionally, Sekkat et al. [29] utilized a reinforcement learning model for grasping tasks with a 5-DOF robot, reporting 95.5% accuracy, albeit without human interaction.…”
Section: Discussion
confidence: 99%
“…This capability has been utilized for route control and pick-and-place routines [29,30]. Additionally, computer vision in cobots has been employed for safety purposes, including gesture recognition and human position tracking [31,32], where vision systems integrated with neural networks are applied to track the body of the user.…”
Section: Introduction
confidence: 99%
“…The specific standard provides a series of safety guidelines depending on the level of interaction and can be used complementarily to other ISO guidelines associated with robotic processes, such as ISO 10218-1:2011 [17], ISO 10218-2:2011 [18], and ISO 13855 [19,20]. Yet, it should be noted that the maturity and readiness of vision systems in industrial environments are still under review, since in some specific scenarios the detection of an operator may be prevented due to occlusions [21,22]. In these cases, using various types of sensors in conjunction with sensor fusion algorithms has been reported as a method to improve the overall perception of the process of human pose estimation for collaborative robotic applications [23].…”
Section: Introduction
confidence: 99%