2018
DOI: 10.1109/access.2018.2884793
Towards Robust Human-Robot Collaborative Manufacturing: Multimodal Fusion

Cited by 71 publications (24 citation statements)
References 42 publications
“…Different sensing techniques provide different aspects for varying interests [130]. For example, Fig.…”
Section: Smart Sensor Network and Sensor Data Fusion
confidence: 99%
“…RL is used in the estimation of factors such as human kinematics via a recursive least-square (RLS) algorithm. In a more comprehensive use of AI in monitoring [88], a deep learning-based multimodal fusion architecture is developed for the robust operation of an HRC manufacturing system. Three modes of communication between humans and robots are considered in this work: voice, hand motion, and body motion.…”
Section: Operate
confidence: 99%
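The monitoring approach quoted above fuses three communication channels (voice, hand motion, body motion) into one robot command. A common way to realize this is late fusion, where each modality-specific recognizer outputs a probability distribution over the same command set and the distributions are combined before deciding. A minimal sketch follows; the command names, example probabilities, and equal weights are illustrative assumptions, not taken from the cited paper.

```python
# Illustrative late-fusion sketch. All command names, probabilities, and
# weights below are assumptions for demonstration, not from the cited work.

COMMANDS = ["start", "stop", "hand_over"]

def fuse(modality_probs, weights):
    """Confidence-weighted average of per-modality probability vectors."""
    fused = [0.0] * len(COMMANDS)
    total = sum(weights)
    for probs, w in zip(modality_probs, weights):
        for i, p in enumerate(probs):
            fused[i] += w * p / total
    return fused

def decide(fused):
    """Pick the command with the highest fused probability."""
    best = max(range(len(COMMANDS)), key=lambda i: fused[i])
    return COMMANDS[best], fused[best]

# Voice alone is ambiguous, but hand and body motion both suggest "stop";
# fusion resolves the ambiguity in favor of the agreeing modalities.
voice = [0.40, 0.35, 0.25]
hand  = [0.10, 0.80, 0.10]
body  = [0.20, 0.70, 0.10]

fused = fuse([voice, hand, body], weights=[1.0, 1.0, 1.0])
cmd, conf = decide(fused)  # cmd == "stop"
```

Weighting the modalities equally is the simplest choice; in practice the weights could reflect each recognizer's estimated reliability in the current shop-floor conditions (e.g. down-weighting voice in a noisy cell).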
“…However, achieving natural interaction is not a simple task because it requires continuous monitoring of the surroundings and surveilling the communication flow direction to assure safety while interacting in the communication [58]. Researchers defend several approaches such as voice commanding, gesture recognition [59], collision avoidance and human-aware navigation, among other solutions to achieve natural interaction [1], [29], [60]- [62].…”
Section: Operation Level
confidence: 99%