2021
DOI: 10.48550/arxiv.2105.07698
Preprint

Automatic Fake News Detection: Are Models Learning to Reason?

Abstract: Most fact checking models for automatic fake news detection are based on reasoning: given a claim with associated evidence, the models aim to estimate the claim veracity based on the supporting or refuting content within the evidence. When these models perform well, it is generally assumed to be due to the models having learned to reason over the evidence with regards to the claim. In this paper, we investigate this assumption of reasoning, by exploring the relationship and importance of both claim and evidenc…

Cited by 1 publication (1 citation statement). References 8 publications.
“…We will attempt to explore the effect of the heterogeneous structure in Section 5.2. Besides, it is worth noting that, when H ce is removed, the model still achieves a reasonable result; this was investigated in a previous study (Hansen et al, 2021), which highlighted the important question of whether models for automatic fact verification have the ability to reason.…”
Section: Ablation Study
confidence: 99%