2022
DOI: 10.54941/ahfe1001451
Supradyadic Trust in Artificial Intelligence

Abstract: There is a considerable body of research on trust in Artificial Intelligence (AI). Trust has been viewed almost exclusively as a dyadic construct, where it is a function of various factors between the user and the agent, mediated by the context of the environment. A recent study has found several cases of supradyadic trust interactions, where a user's trust in the AI is affected by how other people interact with the agent, above and beyond endorsements or reputation. An analysis of these supradyadic interacti…

Cited by 3 publications (8 citation statements)
References 0 publications
“…We found that trust could be lost for a wide variety of stakeholders (users, developers, deployers, etc.). These findings align somewhat with the concept of supradyadic trust (i.e., trust outside of the user-AI dyad; Dorton, 2022). They also align with recent research on system-wide trust, or the idea that different components within a sociotechnical system (AI, people, work, etc.)…”
Section: Other Insights (supporting)
confidence: 86%
“…The AI Incident Database allowed us to answer various research questions about how trust in AI is lost “in the wild” across various contexts. To be more precise, we found the AI Incident Database to be a viable means to conduct naturalistic research, although it appears to have advantages and disadvantages when compared to interview-based knowledge elicitation methods such as the critical incident technique (e.g., Dorton & Harper, 2022a; Dorton, 2022):…”
Section: Discussion (mentioning)
confidence: 99%
“…Premortems do not guarantee an exhaustive set of risks will be identified (Bettin et al., 2022). Similarly, methods such as analytic games (e.g., de Rosa and De Gloria, 2021) and other checklists based on naturalistic inquiry (e.g., Dorton, 2022) are bounded in scope by not only the scenarios and injects examined, but also the expertise of the participants involved in their application. More generally, there is a long-standing challenge of developing tools for an envisioned world with technological change (e.g., Woods and Dekker, 2010).…”
Section: Discussion (mentioning)
confidence: 99%
“…Recent work has shown that analytic wargames involving experienced players in naturalistic settings can be used to test disruptive technologies and uncover emergent behaviors (de Rosa and De Gloria, 2021), and to explore how the work system may stretch with the introduction of new technologies (Dorton et al., 2020). Other approaches have included the use of naturalistic methods such as the critical incident technique to develop evidence-based checklists for AI developers (e.g., Dorton, 2022). Further still, new methods based on naturalistic inquiry such as Systematic Contributors and Adaptation Diagramming (SCAD; Jefferies et al., 2022) and Joint Activity Monitoring (JAM; Morey et al., 2022) have been developed to attempt to identify issues in work systems more proactively.…”
Section: Increasing Understanding and Foresight (mentioning)
confidence: 99%