Extended Abstracts of the 2020 CHI Conference on Human Factors in Computing Systems
DOI: 10.1145/3334480.3382842
Effect of Confidence Indicators on Trust in AI-Generated Profiles

Cited by 5 publications (2 citation statements)
References 18 publications
“…These responses suggest that efforts to flag bot content may mitigate their ability to spread misinformation on Twitter. These results align with previous studies finding that people consider identified or suspected bots as less trustworthy than human-generated content (Waddell 2018; Bruzzese et al. 2020; Graefe and Bohlken 2020; Jakesch et al. 2019). Additionally, because “the author is the feature that led the users to the most accurate perceptions” about the veracity of information (Zubiaga and Ji 2014), it is not surprising that finding out that Twitter posters are bots reduces user attitudes about the tweet.…”
Section: Discussion (supporting)
confidence: 91%
“…Readers did not assign higher credibility scores to human-written vs. bot-written news articles when they did not know who wrote the story, but they considered stories labeled as written by humans more credible and readable (Graefe and Bohlken 2020). Adding low-confidence indicators to AI-generated content decreases participant trust, but high-confidence indicators do not increase trust (Bruzzese et al. 2020). Research on participants who viewed tweets labeled as coming from either a CDC Twitterbot or a human working at the CDC found that “a Twitterbot is perceived as a credible source of information” (Edwards et al. 2014, p. 374).…”
Section: Literature Review (mentioning)
confidence: 99%