2018
DOI: 10.48550/arxiv.1807.11154
Preprint

HARMONIC: A Multimodal Dataset of Assistive Human-Robot Collaboration

Abstract: We present HARMONIC, a large multi-modal dataset of human interactions in a shared autonomy setting. The dataset provides human, robot, and environment data streams from twenty-four people engaged in an assistive eating task with a 6 degree-of-freedom (DOF) robot arm. From each participant, we recorded video of both eyes, egocentric video from a head-mounted camera, joystick commands, electromyography from the participant's forearm used to operate the joystick, third person stereo video, and the joint position…

Cited by 5 publications (15 citation statements)
References 0 publications
“…Latent Actions. Prior work on shared autonomy assumes a pre-defined teleoperation mapping with multiple modes [7], [28]. Consider using a 2-DoF joystick to control a robot arm: your joystick moves the robot's end-effector along the x-y axes in one mode, in z-roll axes in another mode, and so on.…”
Section: Problem Statement (mentioning)
confidence: 99%
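
The mode-switching mapping described in this citation statement can be made concrete with a short sketch. This is an illustrative reconstruction, not code from the HARMONIC dataset or the citing paper; the axis groupings, function names, and twist ordering are assumptions.

```python
import numpy as np

# Illustrative mode-switched mapping from a 2-DoF joystick to a 6-DoF
# end-effector velocity command (axis groupings are assumed, not from the paper).
MODES = [
    (0, 1),  # mode 0: joystick drives x and y translation
    (2, 3),  # mode 1: joystick drives z translation and roll
    (4, 5),  # mode 2: joystick drives pitch and yaw
]

def joystick_to_twist(joystick_xy, mode):
    """Map a 2-DoF joystick deflection to a 6-DoF end-effector twist."""
    twist = np.zeros(6)                      # [vx, vy, vz, wx, wy, wz]
    a, b = MODES[mode % len(MODES)]
    twist[a], twist[b] = joystick_xy
    return twist

# Example: in mode 1 the same joystick deflection commands z motion and roll,
# so the user must toggle modes to reach the remaining degrees of freedom.
print(joystick_to_twist((0.5, -0.2), mode=1))
```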
“…Within End-Effector participants directly controlled the velocity of the robot's end-effector. They pressed a button to toggle between two different modes: one mode controlled the robot's linear velocity, and the other controlled the robot's angular velocity [7], [28]. By contrast, with Ours the robot mapped the user's 2-DoF joystick input to joint velocities.…”
Section: User Study (mentioning)
confidence: 99%
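
For contrast with the toggled End-Effector baseline, a latent-action controller maps the same 2-DoF input directly to joint velocities through a learned decoder. The sketch below is hypothetical: a randomly initialized linear decoder stands in for the trained model, and all names and shapes are assumptions.

```python
import numpy as np

# Hypothetical latent-action decoder: the 2-DoF joystick input z is treated as a
# latent action and decoded into joint velocities for a 6-DoF arm, conditioned on
# the current joint configuration. A trained network would replace this random map.
rng = np.random.default_rng(0)
W_latent = rng.normal(scale=0.1, size=(6, 2))    # latent input -> joint velocity
W_state = rng.normal(scale=0.01, size=(6, 6))    # state-dependent offset

def decode_latent_action(z, q):
    """Decode a 2-D latent input z into a 6-D joint-velocity command at state q."""
    return W_latent @ np.asarray(z) + W_state @ np.asarray(q)

q = np.zeros(6)                        # current joint angles (rad)
dq = decode_latent_action([0.5, -0.2], q)
print(dq)                              # 6-D joint velocity, no mode switching needed
```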
“…Prior work on assistive arms leverages shared autonomy, where the robot's action is a combination of the human's input and autonomous assistance [16,25,24,7]. Here the human controls the robot with a low-DoF interface (typically a joystick), and the robot leverages a pre-defined mapping with toggled modes to convert the human's inputs into end-effector motion [3,22,37]. To assist the human, the robot maintains a belief over a discrete set of possible goal objects in the environment: the robot continually updates this belief by leveraging the human's joystick inputs as evidence in a Bayesian framework [16,25,24,20,38].…”
Section: Shared Autonomy (mentioning)
confidence: 99%
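
The belief update and assistance summarized in this statement can be sketched as a Bayesian filter over candidate goals plus a linear blend of human and autonomous commands. The likelihood model and blending rule below are generic placeholders under assumed names, not the formulation of any specific cited paper.

```python
import numpy as np

def update_belief(belief, goals, ee_pos, human_cmd, beta=5.0):
    """Bayesian update: joystick inputs pointing toward a goal make it more likely."""
    likelihoods = []
    for g in goals:
        direction = (g - ee_pos) / (np.linalg.norm(g - ee_pos) + 1e-8)
        # Boltzmann-style likelihood of the observed command given this goal.
        likelihoods.append(np.exp(beta * float(direction @ human_cmd)))
    posterior = belief * np.array(likelihoods)
    return posterior / posterior.sum()

def shared_autonomy_action(belief, goals, ee_pos, human_cmd, alpha=0.5):
    """Blend the human's command with an autonomous move toward the likeliest goal."""
    g = goals[int(np.argmax(belief))]
    robot_cmd = (g - ee_pos) / (np.linalg.norm(g - ee_pos) + 1e-8)
    return alpha * robot_cmd + (1 - alpha) * human_cmd

# Toy usage with two candidate goal objects and one human command.
goals = [np.array([0.5, 0.2, 0.1]), np.array([0.3, -0.4, 0.2])]
belief = np.ones(2) / 2
ee_pos = np.zeros(3)
human_cmd = np.array([1.0, 0.3, 0.1])
belief = update_belief(belief, goals, ee_pos, human_cmd)
print(belief, shared_autonomy_action(belief, goals, ee_pos, human_cmd))
```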
“…Existing work on assistive robots tackles this problem with pre-defined mappings between user inputs and robot actions. These mappings incorporate modes, and the user switches between modes to control different robot DoFs [22,3,37]. For instance, in one mode the user's 2-DoF joystick controls the x-y position of the end-effector, in a second mode the joystick controls the z-yaw position of the end-effector, and so on.…”
Section: Introduction (mentioning)
confidence: 99%