Proceedings of the Second Workshop on Fact Extraction and VERification (FEVER) 2019
DOI: 10.18653/v1/d19-6616
Team DOMLIN: Exploiting Evidence Enhancement for the FEVER Shared Task

Abstract: This paper contains our system description for the second Fact Extraction and VERification (FEVER) challenge. We propose a two-staged sentence selection strategy to account for examples in the dataset where evidence is conditioned not only on the claim, but also on previously retrieved evidence. We use a publicly available document retrieval module and fine-tune BERT checkpoints for sentence selection and for the entailment classifier. We report a FEVER score of 68.46% on the blind test set.
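The two-staged sentence selection described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: the `relevance` function stands in for a fine-tuned BERT relevance scorer, and the function names and cutoff parameters are assumptions.

```python
# Sketch of a two-staged sentence selection strategy.
# Stage 1 ranks candidate sentences against the claim alone; stage 2 re-scores
# the remaining candidates against the claim concatenated with the stage-1
# evidence, capturing evidence that is conditioned on other evidence.

def relevance(query: str, sentence: str) -> float:
    """Placeholder for a fine-tuned BERT relevance score (here: token overlap)."""
    q, s = set(query.lower().split()), set(sentence.lower().split())
    return len(q & s) / max(len(q), 1)

def two_stage_select(claim: str, candidates: list[str],
                     k1: int = 2, k2: int = 2) -> list[str]:
    # Stage 1: retrieve sentences conditioned on the claim alone.
    ranked = sorted(candidates, key=lambda s: relevance(claim, s), reverse=True)
    stage1 = ranked[:k1]
    # Stage 2: condition on claim + stage-1 evidence for the remaining sentences.
    enriched = claim + " " + " ".join(stage1)
    rest = [s for s in candidates if s not in stage1]
    stage2 = sorted(rest, key=lambda s: relevance(enriched, s), reverse=True)[:k2]
    return stage1 + stage2
```

In a real system both stages would score claim–sentence pairs with a fine-tuned BERT checkpoint; the second pass lets a sentence that is irrelevant to the claim in isolation become retrievable once the first-stage evidence is added to the query.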

Cited by 35 publications (32 citation statements)
References 9 publications
“…The evidence sentence retrieval component in almost all previous works retrieves all evidence in a single iteration (Yoneda et al., 2018; Hanselowski et al., 2018b; Nie et al., 2019; Chen et al., 2017; Soleimani et al., 2020; Liu et al., 2020). Stammbach and Neumann (2019) use a multi-hop retrieval strategy over two iterations to retrieve evidence sentences that are conditioned on the retrieval of other evidence sentences. They then combine the top-ranked evidence sentences with the highest relevance scores.…”
Section: Related Work
confidence: 99%
“…In the claim verification component, Nie et al. (2019), Yoneda et al. (2018) and Hanselowski et al. (2018b) use a modified ESIM model (Chen et al., 2017) for verification. Recent works (Soleimani et al., 2020; Zhou et al., 2019; Stammbach and Neumann, 2019) use the BERT model (Devlin et al., 2019) for claim verification. A few other works (Zhou et al., 2019; Liu et al., 2020) use graph-based models for fine-grained semantic reasoning.…”
Section: Related Work
confidence: 99%
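The BERT-based claim verification mentioned above amounts to sentence-pair classification over the FEVER label set. The sketch below shows the pair formatting and label selection only; the scoring function is a hypothetical stand-in for a fine-tuned BERT entailment head, and all names here are illustrative.

```python
# Sketch of claim verification as BERT-style sentence-pair classification.
# The claim and the retrieved evidence are packed into one pair input; a
# fine-tuned model would produce logits over the three FEVER labels.

LABELS = ("SUPPORTS", "REFUTES", "NOT ENOUGH INFO")

def format_pair(claim: str, evidence_sentences: list[str]) -> str:
    """Pack claim and retrieved evidence into one BERT-style pair input."""
    return f"[CLS] {claim} [SEP] {' '.join(evidence_sentences)} [SEP]"

def verify(claim: str, evidence_sentences: list[str], score_fn) -> str:
    """Return the argmax label from a logits-producing scoring function."""
    logits = score_fn(format_pair(claim, evidence_sentences))
    return LABELS[max(range(len(LABELS)), key=lambda i: logits[i])]
```

In practice `score_fn` would be a BERT checkpoint fine-tuned on FEVER claim–evidence pairs; the graph-based models cited above replace this flat pair encoding with structured reasoning over multiple evidence sentences.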
“…Team DOMLIN (Stammbach and Neumann, 2019) used the document retrieval module of Hanselowski et al. (2018) and a BERT model for two-staged sentence selection based on the work by Nie et al. (2019). They also used a BERT-based model for the NLI stage.…”
Section: Builders Phase
confidence: 99%