2021
DOI: 10.3390/s21196674

Uncertainty-Aware Knowledge Distillation for Collision Identification of Collaborative Robots

Abstract: Human-robot interaction has received a lot of attention as collaborative robots have become widely utilized in many industrial fields. Among techniques for human-robot interaction, collision identification is an indispensable element of collaborative robots for preventing fatal accidents. This paper proposes a deep learning method for identifying external collisions in 6-DoF articulated robots. The proposed method expands the idea of CollisionNet, which was previously proposed for collision detection, to identify the l…
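As a rough, hypothetical illustration of what a multi-class collision identifier over joint signals might look like (the name CollisionIdentifier, the layer sizes, and the window length below are assumptions for the sketch, not the architecture reported in the paper), a minimal PyTorch classifier for a 6-DoF arm could be:

```python
import torch
import torch.nn as nn

class CollisionIdentifier(nn.Module):
    # Hypothetical 1-D CNN over windows of per-joint signals.
    # Input shape: (batch, joints, time); output: one score per collision class.
    def __init__(self, num_joints=6, num_classes=6):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(num_joints, 32, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.Conv1d(32, 64, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),          # pool over the time axis
        )
        self.classifier = nn.Linear(64, num_classes)

    def forward(self, x):
        h = self.features(x).squeeze(-1)      # (batch, 64)
        return self.classifier(h)             # unnormalized class scores

# Toy usage: a batch of 4 windows, 6 joints, 200 time steps.
model = CollisionIdentifier()
scores = model(torch.randn(4, 6, 200))        # shape (4, 6)
```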

Cited by 6 publications (3 citation statements)
References 48 publications
“…To produce the Hard Loss in (12), the student model takes the predictions P and maps them to the ground-truth labels G produced from the modified softmax with τ > 1. The Hard Loss uses a standard categorical cross-entropy (CCE) loss [34] with more than two labels, as the proposed method has six classes. …”
Section: Distilling Knowledge (mentioning)
confidence: 99%
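The quoted statement describes the hard-label branch of a standard knowledge-distillation objective: the student's predictions are scored against the ground-truth labels with categorical cross-entropy, while a temperature τ > 1 softens the class distributions for the soft branch. A minimal sketch of such a combined loss in PyTorch (the function name distillation_loss and the weights tau and alpha are illustrative assumptions, not the paper's exact formulation):

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, tau=2.0, alpha=0.5):
    """Combine a hard loss (ground-truth labels) with a soft loss (teacher)."""
    # Hard loss: standard categorical cross-entropy between the student's
    # predictions and the ground-truth class indices (six classes here).
    hard_loss = F.cross_entropy(student_logits, labels)

    # Soft loss: KL divergence between the student's and the teacher's
    # temperature-softened (tau > 1) class distributions.
    soft_targets = F.softmax(teacher_logits / tau, dim=1)
    log_student = F.log_softmax(student_logits / tau, dim=1)
    soft_loss = F.kl_div(log_student, soft_targets, reduction="batchmean") * tau ** 2

    # alpha weights the hard term against the soft term.
    return alpha * hard_loss + (1.0 - alpha) * soft_loss

# Toy usage: a batch of 8 samples, six collision classes.
student_logits = torch.randn(8, 6)
teacher_logits = torch.randn(8, 6)
labels = torch.randint(0, 6, (8,))
loss = distillation_loss(student_logits, teacher_logits, labels)
```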
“…These engineers will also need to design connected systems that can efficiently and safely interact with humans during the manufacturing process, e.g., a car assembly line [3]. The focal points of this Special Issue are the smart sensors that enable robots and humans to “see” each other [4,5,6,7,8,9] and the machine learning algorithms that process these complex data so the robot can make decisions [10,11,12,13].…”
mentioning
confidence: 99%
“…A neural network model has previously been developed to determine when a collision has occurred [17] so the robot can adjust its force and avoid an accident. Kwon expanded this neural network to also identify where on the robot the collision occurred [11]. This work is important for safety, especially as robots become more complicated with more articulations.…”
mentioning
confidence: 99%