2015 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)
DOI: 10.1109/iros.2015.7353529
Facilitating intention prediction for humans by optimizing robot motions

Abstract: Members of a team are able to coordinate their actions by anticipating the intentions of others. Achieving such implicit coordination between humans and robots requires humans to be able to quickly and robustly predict the robot's intentions, i.e. the robot should demonstrate a behavior that is legible. Whereas previous work has sought to explicitly optimize the legibility of behavior, we investigate legibility as a property that arises automatically from general requirements on the efficiency and robustness o…

Cited by 34 publications (28 citation statements)
References 19 publications
“…Of course, there are many differences between a disembodied virtual agent and the robot used in the present study, which not only had a body but was indeed highly human-like in appearance and in movement. It will therefore be important to probe the relative importance of such physical features as a body, a face, and a human-like appearance, as well as behavioral features such as gaze detection (Sciutti et al., 2015; Palinko et al., 2016), anticipatory gaze, a human-like movement profile (Sciutti et al., 2013), and the capacity to adapt movements to increase their legibility for a human partner (Dragan et al., 2013; Stulp et al., 2015). With respect to the notion of legibility, for example, the present study motivates the hypothesis that a robot's willingness to choose an action that is not optimal for itself (e.g., in terms of energy), but which maximizes legibility for a human partner, may be perceived as effortful and thereby boost a human partner's commitment to the interaction.…”
Section: Discussion
confidence: 99%
“…These properties are fundamental for the robot to optimize the collaboration by introducing other features such as legibility (i.e., the property of generating legible motions that can be easily understood by the human partners) and anticipation (i.e., the property of predicting the human intention and the goal of the collaboration, and optimizing the robot control policy to take the human action into account, reducing the human effort or improving some shared performance criteria) [Stulp et al., 2015, Dragan et al., 2013]. Communication of intention in human-human physical interaction increases with mutual practice; it is related to the arm impedance and to the continuous role adaptation between the partners [Mojtahedi et al., 2017], and it is also related to transparency [Jarrasse et al., 2008]: the same evidence holds for human-robot physical interaction [Srinivasa, 2012, Jarrasse et al., 2013].…”
Section: Tactile Feedback
confidence: 99%
“…After its completion, the evaluation phase starts, which consists of 10 updates with 5 trials per update for each of the three goals, resulting in 150 trials in total. This number is comparable to Stulp et al. (2015) and Busch et al. (2017). To evaluate the participants' perception during the experiment, this phase is divided into three blocks with two breaks after the 4th and 7th update, respectively, in which the participants are asked to fill in a short questionnaire (see results in Figure 4).…”
Section: Evaluation of the Learning Framework
confidence: 99%
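The trial total in the passage above follows directly from the counts it gives (10 updates, 5 trials per update, 3 goals). A minimal sketch of the arithmetic; the variable names are illustrative and not taken from the cited paper:

```python
# Trial count for the evaluation phase described in the quoted passage.
# Numbers come from the passage itself; names are illustrative only.
updates = 10           # policy updates in the evaluation phase
trials_per_update = 5  # trials per update, for each goal
goals = 3              # distinct goals

total_trials = updates * trials_per_update * goals
print(total_trials)  # 150, matching the total reported in the passage
```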