Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (ACL-IJCNLP 2021)
DOI: 10.18653/v1/2021.acl-short.12
Automatic Fake News Detection: Are Models Learning to Reason?

Abstract: Most fact checking models for automatic fake news detection are based on reasoning: given a claim with associated evidence, the models aim to estimate the claim veracity based on the supporting or refuting content within the evidence. When these models perform well, it is generally assumed to be due to the models having learned to reason over the evidence with regards to the claim. In this paper, we investigate this assumption of reasoning, by exploring the relationship and importance of both claim and evidence…
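As an illustration of the claim–evidence setup the abstract describes (not the paper's actual model), a minimal sketch of a veracity scorer that compares a claim against its evidence. A simple token-overlap heuristic stands in for a learned model; all names here are hypothetical:

```python
def token_overlap_score(claim: str, evidence: str) -> float:
    """Fraction of claim tokens that also appear in the evidence.

    A toy stand-in for a learned veracity model: a real system would
    encode both texts and predict supporting vs. refuting content.
    """
    claim_tokens = set(claim.lower().split())
    evidence_tokens = set(evidence.lower().split())
    if not claim_tokens:
        return 0.0
    return len(claim_tokens & evidence_tokens) / len(claim_tokens)


# A claim-only or evidence-only ablation, in the spirit of the paper's
# question, would drop one input entirely and check whether the score
# (or a trained model's accuracy) survives.
score = token_overlap_score(
    "the earth orbits the sun",
    "astronomers confirm the earth orbits the sun every 365 days",
)
```

The point of such an ablation is that if performance persists with the claim removed, the model is likely exploiting signals in the evidence alone rather than reasoning over the pair.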

Cited by 7 publications (10 citation statements). References 21 publications.
“…Current fake news detection models that use a claim's search engine results as evidence may unintentionally use hidden signals that are not attributed to the claim (Hansen et al., 2021). Additionally, models may in fact simply memorize biases within data (Gururangan et al., 2018).…”
Section: Related Work
confidence: 99%
“…A framework to both categorize fake news and to identify features that differentiate fake news from real news has been described by Molina et al. (2021), and debiasing inappropriate subjectivity in text can be accomplished by replacing a single biased word in each sentence (Pryzant et al., 2020). Using the claim as a query, the top ten results from Google News ("snippets") constitute the evidence (Hansen et al., 2021). PolitiFact and Snopes use five labels (False, Mostly False, Mixture, Mostly True, True), which we collapse to True, Mixture, and False.…”
Section: Related Work
confidence: 99%
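The label collapse described in the excerpt above can be sketched as a simple mapping. Which side the "Mostly" ratings fall on is an assumption here, since the citing paper's excerpt does not spell out the grouping:

```python
# Hypothetical mapping of the five PolitiFact/Snopes ratings to three
# coarse classes; assigning "Mostly False" to False and "Mostly True"
# to True is an assumption, not stated in the excerpt.
COARSE_LABELS = {
    "False": "False",
    "Mostly False": "False",
    "Mixture": "Mixture",
    "Mostly True": "True",
    "True": "True",
}


def collapse_label(rating: str) -> str:
    """Map a fine-grained fact-check rating to True / Mixture / False."""
    return COARSE_LABELS[rating]
```

Collapsing labels this way trades granularity for larger, less noisy classes, a common move when the mid-scale ratings are sparse or inconsistently applied across fact-checking sites.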