2019
DOI: 10.48550/arxiv.1909.06508
Preprint

Building Second-Order Mental Models for Human-Robot Interaction

Abstract: The mental models that humans form of other agents, encapsulating human beliefs about agent goals, intentions, capabilities, and more, create an underlying basis for interaction. These mental models have the potential to affect both the human's decision making during the interaction and the human's subjective assessment of the interaction. In this paper, we surveyed existing methods for modeling how humans view robots, then identified a potential method for improving these estimates through inferring a human's mo…
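
To make the notion of a second-order mental model concrete, here is a minimal, hypothetical sketch, not the paper's implementation: a robot maintains a belief distribution over what the human believes about the robot's capability, and updates it from the human's observed reliance decisions. The hypothesis grid, the choice model, and all names are illustrative assumptions.

```python
# Illustrative sketch of a second-order mental model (assumed setup,
# not the method from the paper): the robot estimates the human's
# belief about the robot's capability from the human's choices.

# Discretized hypotheses: "the human thinks the robot succeeds with prob p".
HYPOTHESES = [0.2, 0.5, 0.8]

def likelihood(relied_on_robot: bool, p: float) -> float:
    """Assumed choice model: the human relies on the robot more often
    the more capable they believe it to be."""
    return p if relied_on_robot else 1.0 - p

def update(prior: list[float], relied_on_robot: bool) -> list[float]:
    """Bayesian update of the robot's second-order belief."""
    posterior = [pr * likelihood(relied_on_robot, p)
                 for pr, p in zip(prior, HYPOTHESES)]
    total = sum(posterior)
    return [x / total for x in posterior]

if __name__ == "__main__":
    belief = [1 / 3, 1 / 3, 1 / 3]  # uniform prior over hypotheses
    for choice in [True, True, False, True]:  # observed human decisions
        belief = update(belief, choice)
    for p, b in zip(HYPOTHESES, belief):
        print(f"P(human believes capability={p}) = {b:.3f}")
```

Repeated reliance shifts mass toward the high-capability hypothesis; a refusal shifts it back, so the robot's estimate of the human's view tracks the interaction history.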

Cited by 5 publications (4 citation statements).
References 17 publications (25 reference statements).
“…For teamwork of this sort to succeed when complex tasks are at stake, humans and robots might sometimes need the capacity of theory of mind (or second-order "mental" models) to represent each other's epistemic states (knowledge, belief) and pro-attitudes (desires, goals). Theory of mind comes "live" in the human brain at age three to five (Southgate, 2013; Wellman, Cross, & Watson, 2001) and its role in cooperative human-robot interaction has received considerable attention recently (e.g., Brooks & Szafir, 2019; Devin & Alami, 2016; Görür, Rosman, Hoffman, & Albayrak, 2017; Leyzberg, Spaulding, & Scassellati, 2014; Scassellati, 2002; Zhao, Holtzen, Gao, & Zhu, 2015; for a review, see Tabrez, Luebbers, & Hayes, 2020; for implementations in "moral algorithms," see Tolmeijer, Kneer, Sarasua, Christen, & Bernstein, 2020).…”
Section: Introduction
confidence: 99%
“…This underlines the importance of developing systems that can understand the mental model of their users [7, 19]. In fact, this strategic behaviour of the user can also leak information about the user's goal, which the system could capture to further improve the optimization [6].…”
Section: Discussion
confidence: 99%
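
The point in this statement that a user's strategic behaviour leaks information about their goal is commonly formalized as Bayesian inverse planning with a Boltzmann-rational choice model. The sketch below is illustrative only and is not taken from the cited works; the goals, actions, toy reward function, and rationality parameter are all assumptions.

```python
# Illustrative sketch (assumed setup): inferring a user's goal from
# observed actions via Bayesian inverse planning.
import math

GOALS = ["doc_A", "doc_B"]
ACTIONS = ["open_A", "open_B"]

def reward(action: str, goal: str) -> float:
    """Toy reward: an action is worth 1 if it matches the goal."""
    return 1.0 if action.endswith(goal[-1]) else 0.0

def action_prob(action: str, goal: str, beta: float = 3.0) -> float:
    """Boltzmann-rational likelihood: softmax over action rewards."""
    z = sum(math.exp(beta * reward(a, goal)) for a in ACTIONS)
    return math.exp(beta * reward(action, goal)) / z

def infer_goal(observed: list[str]) -> dict[str, float]:
    """Posterior over goals given observed actions (uniform prior)."""
    post = {g: 1.0 / len(GOALS) for g in GOALS}
    for a in observed:
        post = {g: p * action_prob(a, g) for g, p in post.items()}
        total = sum(post.values())
        post = {g: p / total for g, p in post.items()}
    return post

if __name__ == "__main__":
    # Two actions toward doc_A and one toward doc_B shift the
    # posterior toward doc_A.
    print(infer_goal(["open_A", "open_A", "open_B"]))
```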