Findings of the Association for Computational Linguistics: EMNLP 2020
DOI: 10.18653/v1/2020.findings-emnlp.207
BERT-MK: Integrating Graph Contextualized Knowledge into Pre-trained Language Models

Abstract: Complex node interactions are common in knowledge graphs (KGs), and these interactions can be considered contextualized knowledge that exists in the topological structure of KGs. Traditional knowledge representation learning (KRL) methods usually treat a single triple as a training unit, neglecting this graph contextualized knowledge. To utilize this unexploited graph-level knowledge, we propose an approach to model subgraphs in a medical KG. Then, the learned knowledge is integrated with a pre-trained …
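As a rough illustration of the contrast the abstract draws between single-triple training units and graph-contextualized subgraphs, the sketch below builds a toy KG and extracts an entity's k-hop neighborhood as one training unit. The entities, relations, and the neighborhood construction are illustrative assumptions, not the paper's exact method.

```python
# Sketch only: single-triple units vs. a graph-contextualized (subgraph) unit.
# The toy medical-style KG and the k-hop neighborhood choice are assumptions.
from collections import defaultdict

TRIPLES = [
    ("aspirin", "treats", "headache"),
    ("aspirin", "interacts_with", "warfarin"),
    ("headache", "symptom_of", "migraine"),
    ("warfarin", "treats", "thrombosis"),
]

# Index triples by the entities they touch.
by_entity = defaultdict(list)
for h, r, t in TRIPLES:
    by_entity[h].append((h, r, t))
    by_entity[t].append((h, r, t))

def single_triple_units():
    """Traditional KRL view: each triple is an independent training unit."""
    return [[triple] for triple in TRIPLES]

def subgraph_unit(entity, hops=1):
    """Graph-contextualized view: the training unit is the set of triples
    reachable within `hops` steps of `entity` (its local subgraph)."""
    frontier, seen_entities, seen_triples = {entity}, {entity}, set()
    for _ in range(hops):
        next_frontier = set()
        for e in frontier:
            for h, r, t in by_entity[e]:
                seen_triples.add((h, r, t))
                next_frontier.update({h, t} - seen_entities)
        seen_entities |= next_frontier
        frontier = next_frontier
    return sorted(seen_triples)

print(subgraph_unit("aspirin", hops=1))
# [('aspirin', 'interacts_with', 'warfarin'), ('aspirin', 'treats', 'headache')]
```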

Cited by 73 publications (64 citation statements)
References 25 publications (23 reference statements)
“…The model is trained to distinguish the correct entity mention from randomly chosen ones. BERT-MK (He et al., 2020) integrates fact triples from the knowledge graph. For each entity, it samples incoming and outgoing instances from its neighbors in the knowledge graph, and replaces the head or tail entity to create negative instances.…”
Section: Related Work (mentioning)
confidence: 99%
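The corruption strategy described in this citation statement can be sketched as follows. The toy triples, the uniform head-or-tail replacement, and the helper names are assumptions for illustration only, not BERT-MK's actual implementation.

```python
# Sketch: collect an entity's incoming/outgoing triples, then corrupt
# the head or tail to build negative instances (illustrative assumptions).
import random

TRIPLES = [
    ("aspirin", "treats", "headache"),
    ("warfarin", "interacts_with", "aspirin"),
    ("aspirin", "interacts_with", "ibuprofen"),
]
ENTITIES = sorted({e for h, _, t in TRIPLES for e in (h, t)})

def neighborhood(entity):
    """Incoming and outgoing triples of `entity` in the KG."""
    outgoing = [tr for tr in TRIPLES if tr[0] == entity]
    incoming = [tr for tr in TRIPLES if tr[2] == entity]
    return incoming + outgoing

def corrupt(triple, rng=random):
    """Replace either the head or the tail with a random other entity."""
    h, r, t = triple
    if rng.random() < 0.5:
        h = rng.choice([e for e in ENTITIES if e != h])
    else:
        t = rng.choice([e for e in ENTITIES if e != t])
    return (h, r, t)

positives = neighborhood("aspirin")
negatives = [corrupt(tr) for tr in positives]  # one negative per positive
```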
“…Recently, some efforts have been made to inject knowledge into pre-trained language models (Zhang et al., 2019; Lauscher et al., 2019; Levine et al., 2020; Peters et al., 2019; He et al., 2020; Xiong et al., 2020). Most previous works (as shown in Table 1) augment the standard language modeling objective with knowledge-driven objectives and update the entire model parameters.…”
Section: Introduction (mentioning)
confidence: 99%
“…A brief comparison can be found in Table 1. CoLAKE is conceptually similar to K-BERT (Liu et al., 2020) and BERT-MK (He et al., 2019). CoLAKE differs from K-BERT in that, instead of injecting triplets during fine-tuning, CoLAKE jointly learns embeddings for entities and relations during LM pre-training.…”
Section: Related Work (mentioning)
confidence: 99%
“…Lv et al. (2020) propose a GNN-based inference model over conceptual network relationships and heterogeneous graphs of Wikipedia sentences. BERT-MK (He et al., 2019) integrates fact triples in the KG, while REALM (Guu et al., 2020) augments language model pre-training with a learned textual knowledge retriever. Unlike previous works, we incorporate external knowledge both implicitly and explicitly.…”
Section: Related Work (mentioning)
confidence: 99%
“…In this section, we describe the method to extract knowledge facts from the knowledge graph in detail. Once the knowledge is determined, we can choose an appropriate integration mechanism for further knowledge injection, such as an attention mechanism (Sun et al., 2018; Ma et al., 2019), pre-training tasks (He et al., 2019), and multi-task training. Given a question Q and a candidate answer O, we first identify the entity and its type in the text by entity linking.…”
Section: Knowledge Acquisition (mentioning)
confidence: 99%
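A minimal sketch of the knowledge-acquisition step quoted above, assuming a toy KG and an exact-string entity linker (both hypothetical); a real system would use a trained entity linker and a full medical or commonsense KG rather than string matching.

```python
# Sketch: link mentions in a question/answer pair to KG entities, then
# pull the triples attached to those entities for later knowledge injection.
# The toy KG and the naive string-matching linker are assumptions.

TRIPLES = [
    ("aspirin", "is_a", "nsaid"),
    ("aspirin", "treats", "fever"),
    ("fever", "symptom_of", "influenza"),
]
KG_ENTITIES = {e for h, _, t in TRIPLES for e in (h, t)}

def link_entities(text):
    """Naive entity linking: exact match of KG entity names in the text."""
    tokens = text.lower().replace("?", "").split()
    return [tok for tok in tokens if tok in KG_ENTITIES]

def acquire_knowledge(question, candidate_answer):
    """Return all KG triples touching any entity linked in Q or O."""
    linked = set(link_entities(question) + link_entities(candidate_answer))
    return [tr for tr in TRIPLES if tr[0] in linked or tr[2] in linked]

facts = acquire_knowledge("What can treat a fever?", "aspirin")
# [('aspirin', 'is_a', 'nsaid'), ('aspirin', 'treats', 'fever'),
#  ('fever', 'symptom_of', 'influenza')]
```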