Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing
DOI: 10.18653/v1/d17-1317

Truth of Varying Shades: Analyzing Language in Fake News and Political Fact-Checking

Abstract: We present an analytic study on the language of news media in the context of political fact-checking and fake news detection. We compare the language of real news with that of satire, hoaxes, and propaganda to find linguistic characteristics of untrustworthy text. To probe the feasibility of automatic political fact-checking, we also present a case study based on PolitiFact.com using their factuality judgments on a 6-point scale. Experiments show that while media fact-checking remains to be an open research qu…

Cited by 561 publications (327 citation statements)
References 21 publications
“…We call this value the propaganda index, since it reflects the probability for an article to have a propagandistic intent. We use four families of features: Word n-gram features We use tf.idf-weighted word [1, 3]-grams (Rashkin et al 2017).…”
Section: Propaganda Index Computation
confidence: 99%
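The excerpt above describes tf.idf-weighted word [1, 3]-gram features. The following is a minimal illustrative sketch of such features in pure Python, assuming plain whitespace tokenization; the function names and the raw tf.idf variant used here are illustrative, not taken from the cited paper.

```python
import math
from collections import Counter

def ngrams(tokens, n_max=3):
    """All word n-grams for n = 1..n_max, i.e. [1, 3]-grams."""
    return [" ".join(tokens[i:i + n])
            for n in range(1, n_max + 1)
            for i in range(len(tokens) - n + 1)]

def tfidf_features(docs, n_max=3):
    """One tf.idf-weighted n-gram feature dict per document.

    tf is the n-gram's relative frequency in the document;
    idf is log(N / df), so grams shared by every document get weight 0.
    """
    tokenized = [d.lower().split() for d in docs]
    counts = [Counter(ngrams(t, n_max)) for t in tokenized]
    n_docs = len(docs)
    df = Counter()
    for c in counts:
        df.update(c.keys())
    feats = []
    for c in counts:
        total = sum(c.values())
        feats.append({g: (tf / total) * math.log(n_docs / df[g])
                      for g, tf in c.items()})
    return feats

docs = ["the economy is booming", "the economy is collapsing"]
f = tfidf_features(docs)
# The shared trigram "the economy is" gets weight 0 (it appears in every
# document), while the distinguishing unigrams carry positive weight.
```

In practice a library vectorizer with an n-gram range of (1, 3) would replace this hand-rolled version; the sketch only shows why shared n-grams are down-weighted relative to distinctive ones.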
“…The syntax, semantics, and style of the written text can provide significant information about the intention of the authors. It has been widely observed that the language and tone of fake news presentation are more aggressive in general, and it involves a choice of words depicting strong emotions and biases (Rashkin et al, 2017). Our model uses a deep, bidirectional LSTM architecture.…”
Section: Methods
confidence: 99%
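The excerpt above refers to a deep, bidirectional LSTM. As a hedged sketch of the bidirectional part only, the pure-Python example below uses a simple tanh recurrent cell as a stand-in for an LSTM cell (the real gated cell and stacking are omitted); all weights and names are illustrative, not the cited model's.

```python
import math

def rnn_step(h, x, w_h=0.5, w_x=0.5):
    """One tanh recurrent step (a toy stand-in for an LSTM cell)."""
    return math.tanh(w_h * h + w_x * x)

def run_direction(xs):
    """Hidden state after each step, scanning the sequence in order."""
    hs, h = [], 0.0
    for x in xs:
        h = rnn_step(h, x)
        hs.append(h)
    return hs

def bidirectional(xs):
    """Pair the forward and backward hidden states at each position.

    The backward pass scans the reversed sequence, then its states are
    reversed again so index i aligns with position i of the input.
    """
    fwd = run_direction(xs)
    bwd = list(reversed(run_direction(list(reversed(xs)))))
    return list(zip(fwd, bwd))

states = bidirectional([1.0, -1.0, 2.0])
# states[i] = (state summarizing x[0..i], state summarizing x[i..end]),
# so every position sees both left and right context.
```

The design point is that concatenating the two directions gives each token a representation conditioned on the whole sentence, which is what makes bidirectional architectures attractive for judging the tone of a full article.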
“…Although the dataset was developed for a similar problem, we made slight modifications to make it more generalizable. For example, we removed all sentences which were labeled as satire as we theorize that satire is more of a linguistic phenomenon (intended for humor) than fake news (Rashkin et al, 2017).…”
Section: Dataset
confidence: 99%
“…[Figure 2 residue omitted: a taxonomy of credibility evaluation methods spanning deep-semantics approaches (including Rashkin et al. 2017), profile-based, static- and dynamic-network-based, stance-based, and meta-data-based methods.] Figure 2. The review of information credibility evaluation methods (Wu et al., 2019a).…”
Section: Shallow Semantics
confidence: 99%