2021 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)
DOI: 10.1109/iros51168.2021.9636828

Learning Forceful Manipulation Skills from Multi-modal Human Demonstrations

Cited by 13 publications (7 citation statements)
References 22 publications

“…Human emotion recognition for natural HRI [153,157,158]; voice user interface for in-vehicle interaction [174]; audio-based motion generation for HRI [195]; user-oriented programming of collaborative robots [96]; audio-visual speech recognition for HRI of industrial robots and HMI [56,73]; VLN for autonomous agent interaction with humans and the environment [167].

Haptics: teaching-by-demonstration task for HRC [196]; hardness, temperature, and roughness feedback for robot hand control and VR applications (haptic glove) [118]; ion-electronic skin providing real-time force directions and strain profiles for various tactile motions (shear, pinch, spread, torsion, …) [101]; slip detection and re-grasp planning of unknown objects for robust robot grasping [88]; a wearable glove for hand pose and sensory input identification for HRI [117]; multimodal robotic sensing system (M-Bot) for HMI [105]; PBVS with collision avoidance for safe pHRC [170]; robot self-learning of complex manipulation skills (playing Jenga) [89]; robotic manipulators learning from demonstrations for industrial processes such as assembly [108]; in-hand pose estimation for robotic assembly [110].

Physiology: human activity recognition (MR glasses) for hands-free HRI [197]; hand gesture recognition for HMI [97]; wearable glove (hand pose reconstruction and identification of sensory inputs such as holding force, object temperature, conductivity, material stiffness, and user heart rate) [117]; gesture-based control (EMG + EEG) to detect and correct robot mistakes in HRI target-selection tasks [198]; human motion intention recognition for an HRC sawing task [172]; human emotion recognition for HRI and HCI [125,152,157]; lower limb movement prediction for HRI [199].…”
Section: Combination of Two Types of Modalities
confidence: 99%

“…Such sensors are fundamentally critical for robot action intelligence, since robot control in virtually all HRC tasks depends on at least F/T sensor feedback [67,88,89,108–110].…”
Section: Interface
confidence: 99%
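
To illustrate how F/T feedback typically enters such a control loop, below is a minimal admittance-control sketch: the measured wrench drives a virtual mass-damper system, and the integrated velocity offsets the nominal end-effector reference. This is a generic textbook pattern, not an implementation from the cited works; the function name and the gain matrices M and D are illustrative.

```python
# Generic admittance-control step (illustrative sketch only; not from the
# cited works). F/T feedback f_meas drives the virtual dynamics
#   M * a = (f_meas - f_des) - D * v,
# and the integrated velocity v offsets the nominal motion reference.
import numpy as np

def admittance_step(v, f_meas, f_des, M, D, dt):
    """Advance the virtual mass-damper by one time step of length dt."""
    f_err = f_meas - f_des                 # wrench tracking error (6,)
    a = np.linalg.solve(M, f_err - D @ v)  # acceleration of virtual dynamics
    return v + a * dt                      # updated velocity offset (6,)

# Example: 6-DoF wrench, identity virtual mass, diagonal damping.
v = np.zeros(6)
M = np.eye(6)
D = 10.0 * np.eye(6)
f_des = np.zeros(6)
f_meas = np.array([2.0, 0.0, -1.5, 0.0, 0.0, 0.1])  # from an F/T sensor
v = admittance_step(v, f_meas, f_des, M, D, dt=0.01)
```
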
“…Various teaching methods can be used, such as kinesthetic teaching in [5], tele-operation in [8], and visual demonstration in [7]. Different skill models have been proposed to abstract these demonstrations: the full trajectory of the robot end-effector in [6]; dynamic movement primitives (DMPs) in [9], [10]; task-parameterized Gaussian mixture models (TP-GMMs) in [5], [11], which extend GMMs by incorporating observations from different perspectives (so-called task parameters); task-parameterized hidden semi-Markov models (TP-HSMMs) in [12], [13], [14]; and deep neural networks in [7]. In this work, we adopt the TP-HSMM representation to extract both temporal and spatial features from a few human teachings, while allowing generalization over multiple task parameters.…”
Section: A. Learning From Demonstration
confidence: 99%
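
The task-parameterized adaptation mentioned in this statement has a compact closed form in the standard TP-GMM formulation: each task frame j, with pose (A_j, b_j), carries a local Gaussian per mixture component; the local Gaussians are transformed into the global frame, and their product gives the component adapted to the current situation. Below is a minimal sketch of that product step, assuming the standard formulation; the function name and variable layout are illustrative, not the authors' code.

```python
# Minimal TP-GMM reproduction sketch (illustrative; not the authors' code).
# Each task frame j stores a local Gaussian (mu_j, sigma_j) for a mixture
# component; given the frames' current poses (A_j, b_j), the adapted global
# component is the product of the linearly transformed local Gaussians.
import numpy as np

def tp_gmm_component(mus, sigmas, As, bs):
    """Product of transformed Gaussians for one TP-GMM component.

    mus    : list of (d,) local means, one per task frame
    sigmas : list of (d, d) local covariances
    As, bs : current frame rotations (d, d) and origins (d,)
    Returns the adapted global mean and covariance.
    """
    d = mus[0].shape[0]
    precision_sum = np.zeros((d, d))
    info_mean = np.zeros(d)
    for mu, sigma, A, b in zip(mus, sigmas, As, bs):
        mu_g = A @ mu + b             # local mean in the global frame
        sigma_g = A @ sigma @ A.T     # local covariance in the global frame
        prec = np.linalg.inv(sigma_g)
        precision_sum += prec
        info_mean += prec @ mu_g
    sigma_hat = np.linalg.inv(precision_sum)
    return sigma_hat @ info_mean, sigma_hat
```

In a TP-HSMM, this same per-component product is combined with state-duration models and transition probabilities, which is what lets the representation capture the temporal as well as the spatial structure of the demonstrations.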