2016
DOI: 10.1177/1541931213601046

Trust in Automated Agents is Modulated by the Combined Influence of Agent and Task Type

Abstract: Trust in automation is an important topic in the field of human factors and has a substantial impact on both attitudes towards and performance with automated systems. One variable that has been shown to influence trust is the degree of human-likeness displayed by the automated system, with the main finding being that increased human-like appearance leads to increased ratings of trust. In the current study, we investigate whether humanness unanimously leads to higher trust or whether the degree to which …

Cited by 17 publications (28 citation statements) | References 23 publications
“…the tendency to overly rely on initial information) which has been shown to reduce trust in received advice during decision-making and might explain why advisor type did not modulate post-decision trust (Goodyear et al, 2016;Madhavan & Wiegmann, 2005). In line with this hypothesis, a previous study conducted by Smith et al (2016) involving the same agents showed that when human, robot, and computer advisors were randomly assigned to either the social or analytical task, trust in the agents' advice showed the expected advisor by task type interaction, with stronger compliance with the human advisor on the social task and the machine agents on the analytical task. This indicates that allowing individuals to select their preferred advisor before receiving specific advice from them produces a different trust outcome than when agents are assigned without participant control.…”
Section: Discussion (mentioning)
confidence: 77%
“…Lost for Words; Golan et al, 2006). The analytical task was adapted from Smith, Allaham & Wiese (2016) and required participants to solve standard addition and subtraction operations.…”
Section: Task (mentioning)
confidence: 99%
“…The lack of oxytocin differences for the human agent may follow in this tradition, as overall trust levels were lower for human agents. Indeed, a previous study using the same pattern-matching task found that participants held a strong bias against collaborating with the human (de Visser et al, 2016), an effect that seems to be driven by initially lower expectations regarding the human agent’s ability to complete the pattern-matching task accurately (Smith, Allaham, & Wiese, 2016). Lower levels of trust and compliance observed in the current study suggest that the human agent was again seen as untrustworthy: administering oxytocin may not be able to override such a bias consistent with previous research (Mikolajczak et al, 2010).…”
Section: Discussion (mentioning)
confidence: 98%