2019 14th ACM/IEEE International Conference on Human-Robot Interaction (HRI)
DOI: 10.1109/hri.2019.8673310

Characterizing Input Methods for Human-to-Robot Demonstrations

Abstract: Human demonstrations are important in a range of robotics applications, and are created with a variety of input methods. However, the design space for these input methods has not been extensively studied. In this paper, focusing on demonstrations of hand-scale object manipulation tasks to robot arms with two-finger grippers, we identify distinct usage paradigms in robotics that utilize human-to-robot demonstrations, extract abstract features that form a design space for input methods, and characterize existing…

Citations: Cited by 13 publications (6 citation statements)
References: 49 publications (91 reference statements)
“…Our approach requires only observations of the end effector and can infer the constraint and slip parameters without knowledge of the manipulated object or additional environment sensing. Specifically, our methods take position and wrench trajectories of the end effector as input, for example recorded from instrumented tongs [1] (Figure 1). Wrench information affords a way to identify relative slip between the end effector and manipulated object via estimates of the friction and grip force, while also providing reaction information that helps to identify underlying constraints.…”
Section: Robot Leverages Semantics of the Demonstration
confidence: 99%
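The slip test described in the snippet above can be made concrete as a Coulomb friction-cone check on the measured contact forces. The sketch below is illustrative only, not the cited authors' implementation; the function name, the gripper-frame convention (grip axis along z), and the externally supplied friction coefficient are assumptions.

import numpy as np

def detect_slip(forces: np.ndarray, mu: float, grip_force: float) -> np.ndarray:
    """Flag samples where the tangential load exceeds the friction cone
    supported by the grip (normal) force.

    forces     -- (N, 3) contact forces in the gripper frame; this sketch
                  assumes the grip (squeeze) axis is z
    mu         -- estimated finger-object friction coefficient
    grip_force -- estimated normal squeeze force from the tongs' sensors
    """
    tangential = np.linalg.norm(forces[:, :2], axis=1)  # in-plane load
    # Coulomb model: the contact resists at most mu * normal force, so
    # anything beyond that suggests relative slip of the held object.
    return tangential > mu * grip_force

In practice, mu and grip_force would themselves be estimated from the wrench trajectory, as the snippet suggests; here they are passed in to keep the sketch self-contained.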
“…Uncovering the semantics of both slip and an underlying geometric constraint can allow a system to determine an appropriate way to execute the task, such as…”
Section: Introduction
confidence: 99%
“…The study took 60 minutes and all participants received $12 as compensation. To acquire the resulting pose and wrench measurements that occur during constraint interactions, subjects used custom instrumented tongs [6] equipped with force-torque sensors (ATI Mini40) and motion-capture markers (OptiTrack Flex 13) to perform demonstrations. The instrumented tongs, shown in Fig.…”
Section: A. Experimental Setup
confidence: 99%
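To make the data described in this setup concrete, the sketch below bundles one synchronized sample of motion-capture pose and force-torque reading. The class name, field layout, and units are assumptions for illustration, not the study's actual recording format.

from dataclasses import dataclass
import numpy as np

@dataclass
class TongsSample:
    """One synchronized sample from instrumented tongs (hypothetical layout).

    position    -- (3,) tongs position from motion capture, in meters
    orientation -- (4,) unit quaternion (w, x, y, z) from motion capture
    wrench      -- (6,) force [N] and torque [N*m] from the force-torque sensor
    stamp       -- time since the start of the demonstration, in seconds
    """
    position: np.ndarray
    orientation: np.ndarray
    wrench: np.ndarray
    stamp: float

A demonstration is then a time-ordered list of such samples, which downstream methods can consume as the pose and wrench trajectories mentioned in the other snippets.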
“…This makes such control difficult to use, limiting the ability of such systems to perform tasks involving constraints. One solution is to infer constraints from a human demonstration, which eliminates the need for explicit programming [5], [6]. This paper presents an approach to…” [Fig. 1, top: Demonstration of an espresso-making task using instrumented tongs, consisting of sliding an espresso cup, pulling out a drawer, and actuating an espresso lever.]
Section: Introduction
confidence: 99%
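As a toy illustration of inferring a constraint from a demonstration (not the actual method of [5] or [6]), a prismatic constraint such as the drawer pull in the espresso task can be fit to recorded end-effector positions with a principal-component fit; the function name and return convention here are assumptions.

import numpy as np

def fit_prismatic_axis(positions: np.ndarray):
    """Fit a line (prismatic constraint) to end-effector positions.

    positions -- (N, 3) demonstrated end-effector positions
    Returns a point on the line, its unit direction, and the RMS residual;
    a small residual suggests the motion was constrained to that axis.
    """
    centroid = positions.mean(axis=0)
    centered = positions - centroid
    # The first right singular vector is the dominant direction of motion.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    direction = vt[0]
    # Residuals: components of each sample perpendicular to the fitted line.
    residuals = centered - np.outer(centered @ direction, direction)
    rms = np.sqrt((residuals ** 2).sum(axis=1).mean())
    return centroid, direction, rms

Comparing such residuals across candidate constraint models (free, prismatic, revolute) is one simple way a system could pick the model that best explains a demonstration.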
“…UTAUT and UTAUT2 have been used in a wide variety of studies to predict usage behavior or assess usage intention of information technology. Examples include two-factor authentication [6], smart payment cards [12], the Internet of Things [19], gaming [24,27], e-government [16], social media [39,47], and robotic technology [43]. More recent research [48,49] has shown that the constructs and determinants of usage intention have changed with time and advances in IT.…”
Section: Related Work
confidence: 99%