2013 IEEE/RSJ International Conference on Intelligent Robots and Systems
DOI: 10.1109/iros.2013.6696664

Multimodal control for human-robot cooperation

Abstract: For intuitive human-robot collaboration, the robot must quickly adapt to the human behavior. To this end, we propose a multimodal sensor-based control framework, enabling a robot to recognize human intention, and consequently adapt its control strategy. Our approach is marker-less, relies on a Kinect and on an on-board camera, and is based on a unified task formalism. Moreover, we validate it in a mock-up industrial scenario, where human and robot must collaborate to insert screws in a flank.

Cited by 31 publications (27 citation statements)
References 14 publications (18 reference statements)
“…To this end, many pHRI works rely on the physical contact (touch) between robot and operator [31]. More recently, to guarantee interaction even in the absence of direct contact, researchers have proposed the use of pointing gestures [32], as well as the integration of vision with force [33,34,35]. Also, in our work, interaction includes both vision and force.…”
Section: Research On Physical Human-Robot Collaboration (mentioning, confidence: 99%)
“…In (27), Λ* is the diagonal gain matrix applied when s is close to s*, and α ≥ 0 and β ∈ ]0, 1] are two scalar parameters such that, as the task error norm ||s* − s|| increases, Λ exponentially decreases (with slope dependent on α) to βΛ*, for very large task error. This exponential trend compensates that of the error signal, thus generating a less variable control input q̇, as will be shown by the experiments.…”
Section: Control Framework (mentioning, confidence: 99%)
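The citation above describes a gain schedule that equals Λ* near the target and decays exponentially toward βΛ* as the task error grows. A minimal scalar sketch of a gain with that behavior is given below; the exact expression (and the function name `adaptive_gain`) is an assumption consistent with the quoted description, not the paper's actual equation (27).

```python
import numpy as np

def adaptive_gain(error, lam_star, alpha=1.0, beta=0.5):
    """Scalar gain that matches the quoted behavior (assumed form):
    - equals lam_star when the task error norm is zero,
    - decays exponentially, with slope set by alpha >= 0,
    - approaches beta * lam_star (beta in ]0, 1]) for large errors.
    """
    e = np.linalg.norm(error)
    return lam_star * (beta + (1.0 - beta) * np.exp(-alpha * e))

# Small error -> gain close to lam_star; large error -> gain close to beta * lam_star.
g_near = adaptive_gain(np.zeros(3), lam_star=2.0)
g_far = adaptive_gain(np.full(3, 100.0), lam_star=2.0)
```

In the diagonal-matrix case described in the citation, this scalar schedule would be applied entry-wise to the entries of Λ*; the lower bound βΛ* keeps the control input q̇ from varying as strongly as the raw error signal.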
“…In our previous work [27], we have started the design of a multimodal framework for human-robot cooperation. The approach is marker-less, and has been validated in a mock-up industrial scenario.…”
Section: Introduction (mentioning, confidence: 99%)
“…Finally, Cherubini et al. [15] also use the Kinect and another camera, rigidly linked to the robot's end effector, for human intention recognition and human-robot collaboration in an industrial environment. The system is based on a multimodal control approach and a state machine.…”
Section: A Camera-based Motion Tracking (mentioning, confidence: 99%)