2019
DOI: 10.1109/lra.2019.2928782

Intuitive Task-Level Programming by Demonstration Through Semantic Skill Recognition

Cited by 40 publications (27 citation statements)
References 21 publications
“…To address this confusion, for the final evaluated version of our tool, each Destination was manipulated directly, and an indicator showed when it was reachable by the robot Agent. However, for future versions of Authr, this capability may be added back in, for when users have a physical robot on the scene and wish to use the physical robot to configure the destinations and object locations more easily, much like interfaces such as Polyscope and RAZER [42].…”
Section: Results
confidence: 99%
“…While the interface was successful in allowing users to specify complex programs, users had difficulties understanding the types and intentions of the robots' actions. RAZER was designed for task-level programming to allow shop-floor operators to leverage lower-level actions developed by experts, and it was later extended to support programming by demonstration [43,42]. They compared their solution with systems such as CoSTAR and Scratch, finding RAZER to be easier to understand by nonexperts.…”
Section: Related Work
confidence: 99%
“…Based on the insights of this work, we argue that adaptable haptically augmented VFs explain a task more accurately than oral or visual explanations. In the future, the tasks to be trained could be learned from expert demonstrations (Steinmetz et al., 2019). To achieve this, an expert surgeon could perform a task first, allowing the abstraction of skills and corresponding support functions.…”
Section: Discussion
confidence: 99%
“…Current approaches differentiate between a low-level trajectory encoding and a high-level symbolic encoding. By monitoring and comparing the user's task demonstrations to pre- or postconditions of a predefined behavior set, a symbolic encoding partitions the task into action segments [11] [12]. In [8] this step is followed by arranging the segments of different task demonstrations into a generalized topology, while in [12], a recognized sequence of parameterized skills is propagated to a graphical interface for further processing by the user.…”
Section: Related Work
confidence: 99%
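
The segmentation scheme described in the last excerpt, matching observed state transitions against the pre- and postconditions of a predefined skill set, can be illustrated with a short sketch. The Python below is a minimal toy illustration under assumed representations, not the method of the cited paper or of [11]/[12]: the State alias, the Skill record, the predicate names, and the recognize_skills function are all hypothetical.

# Minimal sketch of semantic skill recognition via pre-/postcondition
# matching. The world-state encoding, Skill dataclass, and example
# predicates are hypothetical illustrations, not the cited systems' API.
from dataclasses import dataclass

# A symbolic world state is a set of ground predicates,
# e.g. {"gripper_empty", "obj_at_A"}.
State = frozenset

@dataclass(frozen=True)
class Skill:
    name: str
    pre: State   # predicates that must hold before the skill starts
    post: State  # predicates that must hold after the skill ends

# Predefined behavior set against which demonstrations are compared.
SKILLS = [
    Skill("pick", pre=State({"gripper_empty", "obj_at_A"}),
                  post=State({"holding_obj"})),
    Skill("place", pre=State({"holding_obj"}),
                   post=State({"gripper_empty", "obj_at_B"})),
]

def recognize_skills(observed: list) -> list:
    """Partition a demonstrated state sequence into action segments by
    matching each consecutive state transition against the pre- and
    postconditions of the predefined skill set."""
    segments = []
    for before, after in zip(observed, observed[1:]):
        for skill in SKILLS:
            # A skill explains the transition if its preconditions held
            # before the transition and its postconditions hold after it.
            if skill.pre <= before and skill.post <= after:
                segments.append(skill.name)
                break
    return segments

# Example: a monitored pick-and-place demonstration.
demo = [
    State({"gripper_empty", "obj_at_A"}),
    State({"holding_obj"}),
    State({"gripper_empty", "obj_at_B"}),
]
print(recognize_skills(demo))  # ['pick', 'place']

Representing states as predicate sets makes the pre-/postcondition test a simple subset check; a real system would derive these predicates from sensor data, which is the "monitoring" step the excerpt refers to.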