2022
DOI: 10.1093/jcmc/zmac010
When AI moderates online content: effects of human collaboration and interactive transparency on user trust

Abstract: Given the scale of user-generated content online, the use of artificial intelligence (AI) to flag problematic posts is inevitable, but users do not trust such automated moderation of content. We explore if (a) involving human moderators in the curation process and (b) affording “interactive transparency,” wherein users participate in curation, can promote appropriate reliance on AI. We test this through a 3 (Source: AI, Human, Both) × 3 (Transparency: No Transparency, Transparency-Only, Interactive Transparency) …

Cited by 30 publications (24 citation statements)
References 23 publications
“…In particular, stakeholders and intended end-users are routinely asked to weigh in on input features in domains such as health care [62]. The primary goals of feature engineering are to improve model performance [31,63] and increase transparency and user trust [45]. Our study does not address the first goal but is related to the latter.…”
Section: Discussion (mentioning)
confidence: 99%
“…By engaging a small group of stakeholders at a local food rescue organization through a series of in-person meetings to elicit their beliefs, and implementing and evaluating machine learning models based on those beliefs, Lee et al. [40] found that participatory design improved participants' perceived procedural fairness, confidence in the models' representation of their own beliefs, and distributive outcomes. In studying user interactions with social media, Molina and Sundar [45] find that allowing users to tinker with which keywords to use in the classification of hate speech enhances users' trust in and agreement with the algorithm. In our experiments, users are able to express their value judgements through modifications to the model training process.…”
Section: Related Work (mentioning)
confidence: 99%
“…In addition to the aforementioned positive machine heuristic, online users could also be applying the negative machine heuristic (Sundar, 2020). A negative machine heuristic refers to the general impression that machines are rigid and lack the flexibility to make nuanced subjective judgments like humans (Molina & Sundar, 2022). The distinction between the positive machine heuristic and the negative machine heuristic can help explain the contradictory evidence regarding why users trust machines more than humans in some instances, but trust them less in others.…”
Section: Machine Heuristics Triggered by ChatGPT (mentioning)
confidence: 99%
“…However, these AI classifiers are largely unreliable, particularly with short texts and when AI-generated texts are co-edited by humans. Without proper explanation, users may not be fully aware of the limitations of these systems and may rely on the positive machine heuristic, again trusting the AI classifier's judgment regarding the source of information (Molina & Sundar, 2022). To promote critical evaluation of the information, we should provide users with enough explanation to help them understand the strengths and limitations of AI detection tools in addition to those of AI generation tools.…”
Section: Our Prompt: You Got That Wrong, the Main Model Is About Techn… (mentioning)
confidence: 99%
“…Given that users may not trust platforms' automated decision-making on their report, the platform may involve a human moderator in the content curation process. Scientific studies show that allowing users to provide feedback on the algorithm enhances users' trust in the system (Molina et al., 2022).…”
Section: User Report Decision Making (mentioning)
confidence: 99%