2018
DOI: 10.1007/s12193-018-0287-x
Training the use of theory of mind using artificial agents

Abstract: When engaging in social interaction, people rely on their ability to reason about the unobservable mental content of others, which includes goals, intentions, and beliefs. This so-called theory of mind ability allows them to more easily understand, predict, and influence the behavior of others. People even use their theory of mind to reason about the theory of mind of others, which allows them to understand sentences like 'Alice believes that Bob does not know about the surprise party'. But while the use of higher…

Cited by 13 publications (6 citation statements) · References 27 publications (46 reference statements)
“…With socialization, people develop the capacity for higher-order theory of mind. This is why, when people know an artificial agent's level of theory of mind in a strategic game and can thus predict its behavior, they tend to increase their own theory-of-mind reasoning and hence outperform the agent [14].…”
Section: Theory of Mind
“…However, people still hold preconceptions about agents' lack of a human-like mind in many negotiation scenarios. People apply their higher-order theory of mind reasoning when competing with predictable agents and end up with higher scores when the aim is to win [14]. Specifically, a human opponent is granted agency by default, but a machine's agency can be independent of or dependent on a human actor; the belief about the agent (autonomous vs. human-controlled) can result in different tactics adopted by human players [45,51].…”
Section: Negotiations
“…Virtual agents may vary across applications in terms of capacity (a lifeless icon vs. a living identity) (Subagdja & Tan, 2019), human-like appearance (having faces and major human-body parts vs. none) (de Borst & de Gelder, 2015), gender-based features (Lee, Nass, & Bailenson, 2014), mental states (Pantelis et al., 2014), emotions (de Borst & de Gelder, 2015), personality (Hanna & Richards, 2016), intended and perceived levels of intelligence (how much the agent is designed or perceived to do) (Veltman, de Weerd, & Verbrugge, 2019), and transparency (running in the background vs. appearing on the application) (Szafir, 2019). Studies have investigated the effect of virtual agent characteristics and representations on people's perceptions of them and on the utilization of web-based applications (Georgeff, Pell, Pollack, Tambe, & Wooldridge, 1999).…”
Section: Emerging Technologies and Extensions of CMC Theories
“…This aspect is highly related to the perceived level of intelligence and capacity (agency) this agent is equipped with (Nowak & Fox, 2018; Stein & Ohler, 2017). Knowing what the agent can and cannot do is expected to mediate how the user plans to use the application and interact with the agent, and further to determine continued use of the application (Veltman et al., 2019). Through iterative interactions, a user may discover the agent's capacities and limitations as well as the application's range of functions and limitations.…”
Section: New Framework for CMC and Human–Agent Interaction
“…Veltman, de Weerd, and Verbrugge, in their paper on training the use of theory of mind using artificial agents, argue that when interacting with virtual agents, high fidelity of the training situation at hand may have adverse effects: e.g., graphics may distract users or raise undesired expectations of the virtual agents' affordances, making it increasingly difficult to attribute observed effects to their actual causes. The work of Veltman et al. [7] examines a low-fidelity simulation aimed at training users' higher-order Theory of Mind (ToM) reasoning. In a game in which players need to reason about each other's thoughts, this higher-order thinking is beneficial to winning.…”