Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), 2019
DOI: 10.18653/v1/d19-1250

Language Models as Knowledge Bases?

Abstract: Recent progress in pretraining language models on large textual corpora led to a surge of improvements for downstream NLP tasks. Whilst learning linguistic knowledge, these models may also be storing relational knowledge present in the training data, and may be able to answer queries structured as "fill-in-the-blank" cloze statements. Language models have many advantages over structured knowledge bases: they require no schema engineering, allow practitioners to query about an open class of relations, are easy to extend to more data, and require no human supervision to train.
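To make the cloze-style querying concrete, below is a minimal sketch of probing a masked language model with a fill-in-the-blank statement. It uses the Hugging Face transformers fill-mask pipeline; the model choice and the example query are illustrative assumptions, not the paper's exact experimental setup.

```python
# Minimal sketch: query a pretrained masked language model with a cloze
# statement, in the spirit of "fill-in-the-blank" relational probing.
# The model and the query below are illustrative, not the paper's setup.
from transformers import pipeline

unmasker = pipeline("fill-mask", model="bert-base-uncased")

# Relational query (Dante, born-in, ?) expressed as a cloze statement.
for prediction in unmasker("Dante was born in [MASK]."):
    print(f"{prediction['token_str']:>12}  score={prediction['score']:.3f}")
```

The top-ranked tokens serve as the model's answers to the relational query; ranking candidate fills by probability is what allows a language model to be read as a soft knowledge base.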


Cited by 1,259 publications (1,378 citation statements)
References 25 publications
“…Other examples of hybrid symbolic and sub-symbolic methods where a knowledge-base tool or graph-perspective enhances the neural (e.g., language [308]) model are in [309,310]. In reinforcement learning, very few examples of symbolic (graphical [311] or relational [75,312]) hybrid models exist, while in recommendation systems, for instance, explainable autoencoders are proposed [313].…”
Section: Hybrid Transparent and Black-box Methods (mentioning, confidence: 99%)
“…Neural-symbolic Systems [297, 298, 299, 300]; KB-enhanced Systems [24, 169, 301, 308, 309, 310]; Deep Formulation [264, 302, 303, 304, 305]; Relational Reasoning [75, 312, 313, 314]; Case-base Reasoning [316, 317, 318] (Fig. 11a).…”
Section: Hybrid Transparent and Black-box Methods (mentioning, confidence: 99%)
“…Our work differs in that we consider arbitrary relations (as opposed to the object-property and object-affordance relations) and in that we automatically identify the most appropriate trigger sentences for each relation. The problem of extracting relational knowledge from the BERT language model was also studied very recently in Petroni et al. (2019). In this work, a wide range of relations is considered, but their approach again depends on manually chosen trigger sentences.…”
Section: Related Work (mentioning, confidence: 99%)
“…Having defined our probing techniques, we now discuss how to generate the prompts for the recommendation and search probes, along with the templates we used. Based on the knowledge extracted from rating and review datasets, we create prompt sentences in a similar manner to how previous work extracted knowledge from other data sources [26,27].…”
Section: Templates and Prompt Generation (mentioning, confidence: 99%)
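The template-based prompt generation described in that statement can be illustrated with a short sketch: a relation template with a subject slot and a masked object slot is filled from an extracted knowledge triple. The templates and the triple below are hypothetical examples, not the ones used in the cited work.

```python
# Hypothetical relation templates: [X] is the subject slot, [Y] the object
# slot that is left masked for the language model to fill in.
TEMPLATES = {
    "directed_by": "The movie [X] was directed by [Y].",
    "has_genre": "[X] is a [Y] movie.",
}

def make_prompt(relation: str, subject: str, mask_token: str = "[MASK]") -> str:
    """Fill the subject slot and mask the object slot of a relation template."""
    return TEMPLATES[relation].replace("[X]", subject).replace("[Y]", mask_token)

# Triple (Alien, directed_by, ?) drawn from, e.g., a review dataset:
print(make_prompt("directed_by", "Alien"))
# -> The movie Alien was directed by [MASK].
```

The resulting cloze sentence can then be handed to a fill-mask pipeline like the one sketched after the abstract, turning extracted triples into probes of the model's knowledge.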