Proceedings of the 13th International Workshop on Semantic Evaluation 2019
DOI: 10.18653/v1/s19-2150
AUTOHOME-ORCA at SemEval-2019 Task 8: Application of BERT for Fact-Checking in Community Forums

Abstract: Fact checking is an important task for maintaining high-quality posts and improving user experience in Community Question Answering forums. Accordingly, SemEval-2019 Task 8 aims to identify factual questions (subtask A) and to detect true factual information in the corresponding answers (subtask B). To address this task, we propose a system based on the BERT model with meta information about the questions. For subtask A, the outputs of the fine-tuned BERT classification model are combined with the feature of…
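A minimal sketch of the kind of system the abstract outlines: the probabilities of a fine-tuned BERT classifier are concatenated with meta information about the question and passed to a small downstream classifier. The specific meta features and the fusion step are assumptions for illustration, since the abstract is truncated here.

# Hypothetical sketch: fuse fine-tuned BERT outputs with question meta features.
# The meta features and the fusion vector are assumptions, not the authors' exact design.
import torch
from transformers import BertForSequenceClassification, BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
bert = BertForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=3)
bert.eval()  # assume the model has already been fine-tuned on the task data


def bert_probs(text: str) -> list[float]:
    """Class probabilities from the (fine-tuned) BERT classifier."""
    inputs = tokenizer(text, truncation=True, max_length=256, return_tensors="pt")
    with torch.no_grad():
        logits = bert(**inputs).logits
    return torch.softmax(logits, dim=-1).squeeze(0).tolist()


def question_meta_features(question: dict) -> list[float]:
    """Simple meta information about the question (assumed, illustrative features)."""
    return [
        float(len(question["subject"].split())),  # subject length in tokens
        float(len(question["body"].split())),     # body length in tokens
        float(question.get("num_answers", 0)),    # number of answers in the thread
    ]


def fused_features(question: dict) -> list[float]:
    """BERT probabilities concatenated with meta features, for a downstream classifier."""
    text = question["subject"] + " " + question["body"]
    return bert_probs(text) + question_meta_features(question)

The fused vector can then be fed to any lightweight classifier (e.g. logistic regression) to produce the final subtask prediction; that choice is likewise an assumption here.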

Cited by 7 publications (8 citation statements). References 18 publications (17 reference statements).
“…Results for the pretraining experiments (shown in Table 7) show significant improvement of the pretrained model over the models trained only on our corpus. This is similar to findings by Lv et al. (2019). However, the improvement is disproportionately larger in the stance prediction task (76.7 vs. 37.8 F1) and the large gains do not carry over to the claim verification task (64.3 vs. 53.1 F1).…”
Section: Results (supporting)
confidence: 86%
“…Pretrained Transformer: Pretraining and transfer learning (Devlin et al, 2018a; Peters et al, 2018; Radford et al, 2019) has recently gained attention as a popular approach to acquiring universal linguistic features and was shown to improve on the state of the art in many downstream NLP tasks with minimal fine-tuning. Lv et al (2019) have successfully explored BERT for the task of fake news detection in English and proposed an extension that improves on fine-tuned BERT. In addition to the aforementioned supervised methods, we evaluate BERT (Devlin et al, 2018a) on both tasks in our corpus.…”
Section: Methods (mentioning)
confidence: 99%
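A minimal sketch of the plain BERT baseline that the cited methods section describes, fine-tuning bert-base as a sentence-pair classifier (e.g. a claim paired with a forum post). The field names, the three-way label set, and the hyperparameters are assumptions, not the cited paper's exact setup.

# Hypothetical fine-tuning step for BERT sentence-pair classification.
import torch
from torch.optim import AdamW
from transformers import BertForSequenceClassification, BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=3)
optimizer = AdamW(model.parameters(), lr=2e-5)


def training_step(claims: list[str], evidence: list[str], labels: list[int]) -> float:
    """One fine-tuning step on a batch of (claim, evidence) pairs."""
    batch = tokenizer(claims, evidence, padding=True, truncation=True,
                      max_length=256, return_tensors="pt")
    batch["labels"] = torch.tensor(labels)
    model.train()
    output = model(**batch)  # loss is computed internally from the provided labels
    output.loss.backward()
    optimizer.step()
    optimizer.zero_grad()
    return output.loss.item()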
“…B-Contradict the original title: Looks similar to the original title but has contradicting meaning (both cannot be true in the same context) by reversing meaning without negating the main verb, using an antonym of the main verb with rephrasing, changing key information using world knowledge such as locations, counts and dates. 2017; Wang et al, 2018; Alzanin and Azmi, 2019) including deep learning techniques (Hanselowski et al, 2017; Baly et al, 2018b; Popat et al, 2018; Chawla et al, 2019; Helwe et al, 2019; Lv et al, 2019).…”
Section: Related Work (mentioning)
confidence: 99%