Proceedings of the 17th Conference of the European Chapter of the Association for Computational Linguistics 2023
DOI: 10.18653/v1/2023.eacl-main.106
Can Pretrained Language Models (Yet) Reason Deductively?

Zhangdie Yuan,
Songbo Hu,
Ivan Vulić
et al.

Abstract: Acquiring factual knowledge with Pretrained Language Models (PLMs) has attracted increasing attention, showing promising performance in many knowledge-intensive tasks. Their good performance has led the community to believe that the models do possess a modicum of reasoning competence rather than merely memorising the knowledge. In this paper, we conduct a comprehensive evaluation of the learnable deductive (also known as explicit) reasoning capability of PLMs. Through a series of controlled experiments, we pos…

Cited by 3 publications (1 citation statement)
References 49 publications
“…Further, one issue which might arise from further specialising a model to a given task/dataset is the well-known phenomenon of catastrophic forgetting: pretrained language models are prone to forgetting previously learnt knowledge or skills when tuned on new data (De Cao et al., 2021; Yuan et al., 2023). To evaluate whether the models would retain their ability to respond faithfully to examples considerably different from those seen during fine-tuning, we evaluate the models tuned on FaithDial on TopiOCQA.…”
Section: Results (mentioning, confidence: 99%)