2021
DOI: 10.1609/icwsm.v15i1.18072

Machine Learning Explanations to Prevent Overtrust in Fake News Detection

Abstract: Combating fake news and misinformation propagation is a challenging task in the post-truth era. News feed and search algorithms could potentially lead to unintentional large-scale propagation of false and fabricated information with users being exposed to algorithmically selected false content. Our research investigates the effects of an Explainable AI assistant embedded in news review platforms for combating the propagation of fake news. We design a news reviewing and sharing interface, create a dataset of ne…

Cited by 17 publications (11 citation statements)
References 22 publications
“…They found that using rationales is not significantly better than showing only model confidence, and generally found the effectiveness of explanations to vary across architectures. Our results align better with Horne et al (2019) than with Mohseni et al (2021), but provide a more nuanced picture and for a population of end users, who are trained to do reliability assessment, and who have a real need to perform this activity on a day-to-day basis.…”
Section: Related Work (supporting)
confidence: 58%
“…In sum, their work leaves open a) whether interpretability methods are useful when applied to deep neural models, b) whether interpretability methods can be useful to experts, and c) whether interpretability methods can be useful in situations where interpretability is critical. While Horne et al (2019) and Mohseni et al (2021) addressed a) and b), our work provides partial answers to all these three questions.…”
Section: Related Work (mentioning)
confidence: 93%
“…Accuracy Most fact-checking user studies assume task accuracy as the primary user goal (Nguyen et al, 2018a; Mohseni, Yang, Pentyala, Du, Liu, Lupfer, Hu, Ji and Ragan, 2021). Whereas non-expert users (i.e., social media users or other form of content consumers) might be most interested in the veracity outcome along with justification, factcheckers often want to use automation and manual effort interchangeably in their workflow (Arnold, 2020; Nakov et al, 2021a).…”
Section: Metrics (mentioning)
confidence: 99%
“…In this domain, factcheckers and journalists may have less trust in algorithmic tools (Arnold, 2020). On the other hand, there is also the risk of over-trust, or users blindly following model predictions (Nguyen et al, 2018a; Mohseni et al, 2021). To maximize the tool effectiveness, we would want users to neither dismiss all model predictions out of hand (complete skepticism) nor blindly follow all model predictions (complete faith).…”
Section: Metrics (mentioning)
confidence: 99%
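
The last excerpt frames over-trust as a calibration problem: useful reliance sits between complete skepticism and complete faith. As a purely illustrative sketch (not a metric from the cited papers; the function name, inputs, and scoring scheme are assumptions), one way to separate the two failure modes is to count how often users follow the model when it is wrong versus override it when it is right:

# Illustrative sketch only: names and scoring are assumptions for this example,
# not the measure used by Mohseni et al. (2021) or the citing papers.
def reliance_rates(user_labels, model_labels, true_labels):
    """Return (over_reliance, under_reliance).

    over_reliance: share of wrong model predictions the user followed (blind faith).
    under_reliance: share of correct model predictions the user rejected (skepticism).
    """
    over = under = model_wrong = model_right = 0
    for u, m, t in zip(user_labels, model_labels, true_labels):
        if m == t:
            model_right += 1
            if u != m:
                under += 1
        else:
            model_wrong += 1
            if u == m:
                over += 1
    over_rate = over / model_wrong if model_wrong else 0.0
    under_rate = under / model_right if model_right else 0.0
    return over_rate, under_rate

# Toy data: 1 = "fake", 0 = "real"
user = [1, 0, 1, 1, 0]
model = [1, 0, 0, 1, 1]
truth = [1, 1, 0, 1, 0]
print(reliance_rates(user, model, truth))  # (0.5, 0.333...)

In this framing, a well-calibrated user keeps both rates low; an intervention that only increases agreement with the model would reduce under-reliance while inflating over-reliance.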