2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)
DOI: 10.1109/iros.2018.8594433
Cost Functions for Robot Motion Style

Abstract: We focus on autonomously generating robot motion for day-to-day physical tasks that is expressive of a certain style or emotion. Because we seek generalization across task instances and task types, we propose to capture style via cost functions that the robot can use to augment its nominal task cost and task constraints in a trajectory optimization process. We compare two approaches to representing such cost functions: a weighted linear combination of hand-designed features, and a neural network parameterization …
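In this framing, a style cost expressed as a weighted linear combination of features augments the nominal task cost inside a trajectory optimizer. A minimal sketch of that idea in Python (the smoothness task cost and the two style features here are illustrative assumptions, not the paper's exact formulation):

```python
import numpy as np

def task_cost(traj):
    # Nominal task cost: sum of squared velocities (a common
    # smoothness objective in trajectory optimization).
    vel = np.diff(traj, axis=0)
    return np.sum(vel ** 2)

def style_features(traj):
    # Hypothetical hand-designed style features: average height of
    # the second coordinate, and total squared jerk of the motion.
    jerk = np.diff(traj, n=3, axis=0)
    return np.array([traj[:, 1].mean(), np.sum(jerk ** 2)])

def total_cost(traj, weights):
    # Style augments the nominal task cost as a weighted linear
    # combination of features; the weights select the style.
    return task_cost(traj) + weights @ style_features(traj)

# Usage: a straight-line trajectory scores lower than a jerky one
# under the same style weights.
traj = np.linspace([0.0, 0.0], [1.0, 1.0], 10)
zigzag = traj.copy()
zigzag[::2, 1] += 0.1
w = np.array([0.5, 0.1])
print(total_cost(traj, w) < total_cost(zigzag, w))
```

In practice this combined cost would be handed to a trajectory optimizer subject to the task constraints, with the weights chosen (or learned) per style.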

Cited by 16 publications (13 citation statements)
References 18 publications (22 reference statements)
“…The optimization cost function can be formulated in different ways. First, it can be defined or learned to directly generate expressive motion (e.g., [104]) or, second, defined as a measure of how well people understand the robot, using an explicitly defined model of user inferences. For example, in Dragan and Srinivasa [17], the authors propose a mathematical formulation of a legibility functional, which is optimized to generate legible trajectories.…”
Section: Formulated (mentioning; confidence: 99%)
“…From the robot's perspective, it models humans by inferring goals [36], [37], tracking mental states [38], [39], predicting actions [40], and recognizing intention and attention [41], [42]. From a human agent's perspective, the robot needs to be more expressive [43], to promote human trust [44], to assist properly [45], [46], and to generate proper explanations of its behavior [44]. We believe the proposed shared AR workspace is an ideal platform for evaluating and benchmarking existing and new algorithms and models.…”
Section: Quantitative Results (mentioning; confidence: 99%)
“…Getting robots and virtual avatars to exhibit realistic-looking and human-recognizable motions is a well-studied problem, from conveying intent in a task [4], [9], [10], to communicating incapability [11], [12], to expressing emotions [5], [13], [14]. In this section, we focus our attention on literature from the latter category, as our goal is enabling robots to learn emotive styles for performing functional tasks.…”
Section: Related Work (mentioning; confidence: 99%)
“…The motions in these methods are hand-crafted and, therefore, specific to the system and task they are being designed for. To generalize to a more diverse set of tasks, recent methods [5], [18], [19] try to learn a cost function that, when optimized, produces the desired emotive motion. However, these methods require collecting labels for each emotion one at a time, resulting in inefficient and costly learning that fails to generalize to new emotions.…”
Section: Related Work (mentioning; confidence: 99%)
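For the linear-feature representation of a style cost, one simple way to obtain per-emotion weights from labeled demonstrations is least-squares regression from trajectory features to scalar style scores. This is only a hypothetical sketch of the per-emotion labeling workflow described above, not the learning procedure used by any of the cited methods:

```python
import numpy as np

def fit_style_weights(feature_matrix, labels):
    """Fit linear style-cost weights for one target emotion.

    feature_matrix: (n_demos, n_features) style features per demo.
    labels: (n_demos,) scalar style scores collected for this emotion.
    Returns the least-squares weight vector.
    """
    weights, *_ = np.linalg.lstsq(feature_matrix, labels, rcond=None)
    return weights

# Usage: with noiseless labels generated by known weights,
# least squares recovers those weights.
X = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
y = X @ np.array([2.0, 3.0])
print(fit_style_weights(X, y))
```

Because the labels are gathered per emotion, each new emotion requires a fresh round of data collection under this scheme, which is exactly the inefficiency the quoted passage points out.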