2022
DOI: 10.1016/j.rcim.2021.102310
Imitation learning for coordinated human–robot collaboration based on hidden state-space models

Cited by 8 publications (1 citation statement)
References 31 publications
“…In many human-robot interaction scenarios, robots need to be able to imitate human motions [3,20]. For example, imitating human motions can be used to reproduce human-demonstrated motions [9,23], for coordination purposes in human-robot collaboration [26], or even to provide feedback to patients in physical rehabilitation scenarios [4]. This imitation is not a simple one-to-one mapping from human joint angle to motor angle, as the embodiments of humans and robots differ in size, proportions, velocities, forces, and dynamics.…”
Section: Introduction (mentioning)
Confidence: 99%
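
To illustrate the point made in the quoted passage, the sketch below shows what a naive one-to-one joint-angle mapping looks like: per-joint scaling of the human angles followed by clamping to the robot's joint limits. All names, joint limits, and values here are hypothetical and not taken from the cited paper; the point is that such a mapping ignores differences in embodiment, which is why learned correspondences (e.g. the paper's hidden state-space models) are used instead.

```python
import numpy as np

# Hypothetical illustration of a naive one-to-one joint-angle mapping from a
# human pose to a robot pose. The quoted passage argues this is insufficient
# because human and robot embodiments differ in size, proportions, and dynamics.

# Example robot joint limits in radians (assumed values, not from the paper).
ROBOT_JOINT_LIMITS = np.array([
    [-2.9, 2.9],   # shoulder pan
    [-1.8, 1.8],   # shoulder lift
    [-2.9, 2.9],   # elbow
])

def naive_retarget(human_angles: np.ndarray, scale: np.ndarray) -> np.ndarray:
    """Map human joint angles to robot joint angles by per-joint scaling,
    then clamp to the robot's limits. Kinematic and dynamic differences
    (link lengths, joint axes, velocities, forces) are ignored, which is
    exactly why this direct mapping fails to yield coordinated motion."""
    robot_angles = human_angles * scale
    return np.clip(robot_angles,
                   ROBOT_JOINT_LIMITS[:, 0],
                   ROBOT_JOINT_LIMITS[:, 1])

if __name__ == "__main__":
    human_pose = np.array([0.4, 1.2, -2.0])          # example human angles (rad)
    robot_pose = naive_retarget(human_pose, scale=np.array([1.0, 0.8, 1.1]))
    print(robot_pose)                                 # elbow angle gets clamped
```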