2023
DOI: 10.1016/j.apergo.2022.103858
Differential biases in human-human versus human-robot interactions

Cited by 15 publications (6 citation statements)
References 27 publications
“…The research in this mini-track demonstrates robots can fulfill certain social and emotional aspects of interactions with humans. This helps to satisfy the call from recent research that has noted the importance of non-performance-based aspects of trust in robots, which has received little attention [1]. The current mini-track also expands the literature on moderators of the trust relationship.…”
Section: Introduction (mentioning)
Confidence: 66%
“…Human-human relationships are conceived differently from human-automation relationships, where an assessment of trust/distrust seems to be dependent on different factors, see, e.g., (Jian et al., 2000; Alarcon et al., 2023; Eicher et al., 2023; Zhang et al., 2023). Benevolence, for example, is about interpersonal relationships, meaning it might not develop in human-automation relationships the same way it does for human-human relationships (Centeio Jorge et al., 2021).…”
Section: Background and Related Work (mentioning)
Confidence: 99%
“…Trust, however, is not a simple concept. The literature has focused on exploring trust in human-automation teams, particularly looking into the differences between human-human and human-automation trust (Alarcon et al., 2023; Eicher et al., 2023; Zhang et al., 2023), how this trust can be optimised (Lee and See, 2004; Groom and Nass, 2007; Webber, 2008; Knocton et al., 2023), and which factors reduce trust (Falcone and Castelfranchi, 2004; Madhavan et al., 2006; Kopp et al., 2023). In particular, automation failure has a significant impact on a person’s trust, i.e., a person interacting with imperfectly reliable automation has a significantly lower level of trust in it in subsequent interactions (Robinette et al., 2017).…”
Section: Introduction (mentioning)
Confidence: 99%
“…The literature has burgeoned from a seminal review article by Lee and See (2004), which outlines the trust process in human-automation interactions. Lee and See’s theoretical lens has been used to investigate trust and trustworthiness in a variety of subfields in addition to trust in automation (Hoff & Bashir, 2015), such as human-robot (Alarcon et al, 2023) and human-cognitive agent interaction (de Visser et al, 2016). A key aspect of Lee and See’s (2004) model focuses on defining automation trustworthiness, which comprises perceptions of automation performance, purpose, and process.…”
Section: Introduction (mentioning)
Confidence: 99%
“…Whereas Lee and See (2004) discuss perceptions of automation trustworthiness, we refer to perceptions of systems, as their theory has been applied more broadly across subfields. Nearly two decades of research has applied this model to investigate human trustworthiness perceptions of automation (Chancey et al, 2017), robot (Alarcon et al, 2023), and robotic swarm referents (Hamdan et al, 2021). Researchers face several unique challenges when creating a scale to assess trustworthiness perceptions toward systems.…”
Section: Introduction (mentioning)
Confidence: 99%