Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing
DOI: 10.18653/v1/2021.emnlp-main.81

Relational World Knowledge Representation in Contextual Language Models: A Review

Abstract: Relational knowledge bases (KBs) are commonly used to represent world knowledge in machines. However, while advantageous for their high degree of precision and interpretability, KBs are usually organized according to manually-defined schemas, which limit their expressiveness and require significant human efforts to engineer and maintain. In this review, we take a natural language processing perspective to these limitations, examining how they may be addressed in part by training deep contextual language models…

Citations: cited by 23 publications (18 citation statements)
References: 75 publications
“…5 that the CM method had the highest performance, but surprisingly, OM performed quite well. This highlights the ability of LMs to memorize facts and act as soft KBs (Petroni et al., 2019; Safavi and Koutra, 2021). This trend is also consistent with general-domain …”
Section: Discussion (supporting)
confidence: 70%
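To make the "soft KB" behavior referenced above concrete, here is a minimal cloze-style probe in the spirit of Petroni et al. (2019), in which a masked language model is asked to complete a relational fact. The model choice and prompt are illustrative assumptions, not details taken from the cited papers.

```python
# Minimal sketch of LAMA-style fact probing (Petroni et al., 2019):
# a masked LM completes a cloze template, acting as a "soft KB".
# The model and prompt below are illustrative assumptions.
from transformers import pipeline

fill = pipeline("fill-mask", model="bert-base-uncased")

# The relational triple (Paris, capital-of, France) phrased as a cloze query.
for pred in fill("The capital of France is [MASK].", top_k=3):
    print(f"{pred['token_str']:>10}  {pred['score']:.3f}")
```

Under this probing view, a high-probability correct completion is read as evidence that the fact is stored in the model's parameters, without any explicit KB lookup.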
“…Contemporary work has highlighted the promise of PLMs on high-level tasks requiring, among other things, access to proper relational knowledge between concepts (see Safavi and Koutra, 2021; Piantadosi and Hill, 2022). Findings from our experiments that target reasoning ability based on perhaps the most well-established of relations, the ISA relation (Murphy, 2003), suggest that PLMs' approximation of inference-making behavior based on simple relational knowledge is at best noisy, owing to clear failures in the presence of distracting information.…”
Section: General Discussion and Conclusion (mentioning)
confidence: 99%
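The failure mode this excerpt describes can be illustrated with a hypothetical probe that prepends distracting context to an ISA cloze query and compares the model's top prediction with and without it. The prompts below are invented for illustration; they are not the study's actual stimuli.

```python
# Hedged sketch of an ISA-relation probe with a distractor sentence,
# illustrating the sensitivity described above. Prompts are invented
# for illustration and are not the cited study's stimuli.
from transformers import pipeline

fill = pipeline("fill-mask", model="bert-base-uncased")

prompts = {
    "plain": "A robin is a type of [MASK].",
    "distracted": "A penguin cannot fly. A robin is a type of [MASK].",
}

for name, prompt in prompts.items():
    top = fill(prompt, top_k=1)[0]
    print(f"{name:>10}: {top['token_str']} ({top['score']:.3f})")
```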
“…LMs pretrained on a large corpus of web data have been shown to contain different kinds of knowledge implicitly in their parameters without the need for any human supervision. This includes world knowledge (Petroni et al., 2019; Rogers et al., 2020), relational knowledge (Safavi and Koutra, 2021), commonsense knowledge (Da et al., 2021), linguistic knowledge (Peters et al., 2018; Goldberg, 2019; Tenney et al., 2019b), actionable knowledge …”
Section: Introduction (mentioning)
confidence: 99%
“…For instance, Wei et al. (2021) evaluate knowledge-enhanced pretrained LMs by delineating the types of knowledge that can be integrated into existing LMs. Safavi and Koutra (2021) divide relevant work according to the level of supervision provided to the LM by a KB. Similarly, Colon-Hernandez et al. (2021) cover the integration of structural knowledge into LMs but forgo implicit knowledge.…”
Section: Introduction (mentioning)
confidence: 99%