The World Wide Web Conference 2019
DOI: 10.1145/3308558.3314119

XFake: Explainable Fake News Detector with Visualizations

Abstract: In this demo paper, we present the XFake system, an explainable fake news detector that assists end users in assessing news credibility. To effectively detect and interpret the fakeness of news items, we jointly consider both attributes (e.g., speaker) and statements. Specifically, the MIMIC, ATTN and PERT frameworks are designed, where MIMIC is built for attribute analysis, ATTN is for statement semantic analysis, and PERT is for statement linguistic analysis. Beyond the explanations extracted from the designed frameworks…
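
To make the statement-analysis idea concrete, here is a minimal, hypothetical sketch of an attention-based statement classifier in the spirit of the ATTN framework described in the abstract. The class names, layer sizes, and six-way label space are illustrative assumptions, not XFake's actual implementation.

```python
# Hypothetical sketch of an attention-based statement classifier
# (in the spirit of ATTN); sizes and names are assumptions.
import torch
import torch.nn as nn

class AttnStatementClassifier(nn.Module):
    def __init__(self, vocab_size, embed_dim=100, num_classes=6):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        # One attention score per word, so highly weighted words can be
        # surfaced to the user as an explanation of the verdict.
        self.attn = nn.Linear(embed_dim, 1)
        self.classify = nn.Linear(embed_dim, num_classes)

    def forward(self, token_ids):
        emb = self.embed(token_ids)                        # (batch, seq, dim)
        scores = self.attn(emb).squeeze(-1)                # (batch, seq)
        weights = torch.softmax(scores, dim=-1)            # attention over words
        context = (weights.unsqueeze(-1) * emb).sum(dim=1) # weighted average
        return self.classify(context), weights             # logits + explanation

model = AttnStatementClassifier(vocab_size=20000)
logits, attn_weights = model(torch.randint(0, 20000, (2, 30)))
```

Returning the attention weights alongside the logits is what makes such a classifier "explainable": the weights can be rendered as a word-level heat map over the statement.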

Cited by 65 publications (27 citation statements)
References 8 publications
“…It leverages a GCN to learn the patterns of rumor propagation. A CNN+RNN model has also been introduced to detect fake news, using user characteristics to create a feature vector and modeling tweets in five-minute time intervals [27,28,29,30]. Other researchers combined convolutional neural networks (CNN) and long short-term memory (LSTM) networks to detect rumors based on the relationship between user and textual information [24].…”
Section: B. Deep Learning Methods
Citation type: mentioning
Confidence: 99%
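To illustrate the combination described in the excerpt above, here is a minimal, hypothetical sketch of a CNN + LSTM rumor detector: a 1-D convolution extracts local features from interval-level tweet representations, and an LSTM models their temporal sequence. All dimensions and names are illustrative assumptions, not taken from the cited works [24, 27-30].

```python
# Hedged sketch of a CNN + LSTM rumor detector over time intervals
# of tweets; dimensions are illustrative assumptions.
import torch
import torch.nn as nn

class CnnLstmRumorDetector(nn.Module):
    def __init__(self, feat_dim=100, conv_channels=64, hidden=128):
        super().__init__()
        # Convolve over the time axis of interval-level feature vectors.
        self.conv = nn.Conv1d(feat_dim, conv_channels, kernel_size=3, padding=1)
        self.lstm = nn.LSTM(conv_channels, hidden, batch_first=True)
        self.out = nn.Linear(hidden, 2)  # rumor vs. non-rumor

    def forward(self, intervals):
        # intervals: (batch, num_intervals, feat_dim), e.g. one feature
        # vector per five-minute window of tweets.
        x = torch.relu(self.conv(intervals.transpose(1, 2)))  # (batch, C, T)
        x, _ = self.lstm(x.transpose(1, 2))                   # (batch, T, H)
        return self.out(x[:, -1])  # classify from the last hidden state

detector = CnnLstmRumorDetector()
logits = detector(torch.randn(4, 12, 100))  # 4 cascades, 12 intervals each
```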
“…By understanding the outputs of these frameworks, they could derive appropriate explanations for interpreting detection outcomes [54]. In 2019, Julio C. S. Reis, André Correia, and others analyzed the SHAP (SHapley Additive exPlanations) method to explain a model [57]. Most of these approaches were proposed to derive explanations from the perspectives of news content and user comments [57], for which many researchers used Graph-aware Co-Attention Networks [60].…”
Section: Credibility Assessment
Citation type: mentioning
Confidence: 99%
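As a concrete illustration of the SHAP-based line of work mentioned in the excerpt above, here is a hedged sketch that trains a tree-based classifier on hand-crafted news features and attributes each prediction to individual features. The features and data are synthetic placeholders, not those of the cited work; only the shap library calls (shap.TreeExplainer, shap_values) are real.

```python
# Hedged sketch of SHAP-based explanation for a fake news classifier;
# features and labels are synthetic placeholders.
import numpy as np
import shap
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X = rng.random((200, 3))          # e.g. source reputation, sentiment, length
y = (X[:, 0] < 0.5).astype(int)   # toy labels: 1 = fake

model = RandomForestClassifier(n_estimators=50).fit(X, y)

# TreeExplainer computes exact Shapley values for tree ensembles,
# giving each feature's contribution to each individual prediction.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X[:5])
print(shap_values)  # per-feature attribution for the first five items
```

Per-prediction Shapley values are what make such a detector explainable to end users: each verdict comes with a ranked list of the features that pushed it toward "fake" or "real".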
“…Because the focus of this paper is on the human-subjects experiment and its findings, we limit the description of our system to an abbreviated summary, though we aim to provide enough detail that a skilled machine-learning system designer could develop a similar system. For a more detailed description of the machine-learning aspects of the system, we refer the reader to the demo paper [58] that describes the XFake architecture and technical details in full.…”
Section: News Statement Data and Classifier
Citation type: mentioning
Confidence: 99%