Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP 2020)
DOI: 10.18653/v1/2020.emnlp-main.609

Fact or Fiction: Verifying Scientific Claims

Abstract: We introduce scientific claim verification, a new task to select abstracts from the research literature containing evidence that SUPPORTS or REFUTES a given scientific claim, and to identify rationales justifying each decision. To study this task, we construct SciFact, a dataset of 1.4K expert-written scientific claims paired with evidence-containing abstracts annotated with labels and rationales. We develop baseline models for SciFact, and demonstrate that simple domain adaptation techniques substantially improve performance compared to models trained on Wikipedia or political news.
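As a rough illustration of the task the abstract describes, the sketch below models a claim paired with an evidence-containing abstract, a SUPPORTS/REFUTES label, and the rationale sentences that justify the decision. The field names and types here are assumptions for illustration only, not the released SciFact schema.

```python
from dataclasses import dataclass, field
from typing import List, Literal

# Labels from the task definition in the abstract.
Label = Literal["SUPPORTS", "REFUTES"]

@dataclass
class EvidenceSet:
    abstract_id: str                 # identifier of the evidence-containing abstract
    label: Label                     # verdict for this claim/abstract pair
    rationale_sentences: List[int] = field(default_factory=list)
    # indices of the abstract sentences that justify the label

@dataclass
class Claim:
    claim_id: int
    text: str                        # expert-written scientific claim
    evidence: List[EvidenceSet] = field(default_factory=list)

# Hypothetical example record (not taken from the dataset).
example = Claim(
    claim_id=1,
    text="Vitamin D supplementation reduces the risk of respiratory infection.",
    evidence=[
        EvidenceSet(abstract_id="PMID:12345", label="SUPPORTS",
                    rationale_sentences=[2, 5]),
    ],
)
print(example.evidence[0].label)  # SUPPORTS
```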

Cited by 165 publications (211 citation statements)
References 33 publications
“…One such example is the task of resolving references to concepts across scientific papers and related news articles. This can help scientists understand how their work is being presented to the public by mainstream media or facilitate fact checking of journalists' work (Wadden et al., 2020). A chatbot or recommender that is able to resolve references to current affairs in both news articles and user input could be more effective at suggesting topics that interest the user.…”
Section: Introduction (citation type: mentioning, confidence: 99%)
“…Constructing a new dataset that properly frames a new task is a common theme in Deep Learning. Wadden et al. [71] construct the SciFact dataset to extend the ideas of FEVER to COVID-19 applications. This is different from the classification task formulations previously mentioned in that they are more creative in task design.…”
Section: Misinformation Detection (citation type: mentioning, confidence: 99%)
“…SciFact and FEVER introduce new datasets that show how supervised learning can tackle Misinformation Detection. Wadden et al. [71] design the SciFact dataset to not only classify a claim as true or false, but to provide supporting and refuting evidence as well. Wadden et al. [71] deploy a clever annotation scheme of using "citances", sentences in scientific papers that cite another paper, as examples of supporting or refuting evidence for a claim.…”
Section: Misinformation Detection (citation type: mentioning, confidence: 99%)
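To make the citance idea from the statement above concrete, here is a minimal sketch of harvesting candidate citances from already-segmented sentences. The regex and function are illustrative assumptions, not the SciFact authors' annotation tooling.

```python
import re

# A "citance" is a sentence that cites another paper. The pattern below
# covers two common marker styles: numeric brackets like [12] or [3, 7],
# and author-year forms like (Wadden et al., 2020). This is an assumption
# for illustration; real pipelines typically rely on parsed citation spans.
CITATION_PATTERN = re.compile(
    r"\[\d+(?:,\s*\d+)*\]"                            # [12] or [3, 7]
    r"|\([A-Z][A-Za-z'-]+(?: et al\.?)?,\s*\d{4}\)"   # (Wadden et al., 2020)
)

def extract_citances(sentences):
    """Return the sentences that contain a citation marker."""
    return [s for s in sentences if CITATION_PATTERN.search(s)]

sentences = [
    "Claim verification has attracted growing interest.",
    "SciFact extends FEVER to the scientific domain (Wadden et al., 2020).",
    "Results are reported in Table 2.",
]
print(extract_citances(sentences))
# ['SciFact extends FEVER to the scientific domain (Wadden et al., 2020).']
```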
“…For claim verification, a system identifies papers containing evidence that supports or refutes a claim provided in a query. SciFact [97] (Row S22) is an example of such a system. For clinical diagnostic support, a system aims to assist healthcare providers in clinical practice, e.g.…”
Section: Text Mining Systems (citation type: mentioning, confidence: 99%)