2020
DOI: 10.1609/aaai.v34i03.5684
Commonsense Knowledge Base Completion with Structural and Semantic Context

Abstract: Automatic KB completion for commonsense knowledge graphs (e.g., ATOMIC and ConceptNet) poses unique challenges compared to the much-studied conventional knowledge bases (e.g., Freebase). Commonsense knowledge graphs use free-form text to represent nodes, resulting in orders of magnitude more nodes compared to conventional KBs (~18x more nodes in ATOMIC compared to Freebase (FB15K-237)). Importantly, this implies significantly sparser graph structures — a major challenge for existing KB completion methods that…

Cited by 95 publications (114 citation statements)
References 0 publications
“…In this work, we focus on the English vocabulary which contains approximately 1.5 million nodes. To avoid the step of the query construction and take full advantage of the large scale KG, we exploit ConceptNet embedding proposed in (Malaviya et al., 2020) and generate the KG representation k ∈ R^{n_T × d_k}.…”
Section: Input Representations
confidence: 99%
“…Knowledge graph embedding: During our experiments, we explored different node embeddings for ConceptNet (e.g., GloVe (Pennington et al., 2014), NumberBatch (Speer et al., 2016), and (Malaviya et al., 2020)). We found that the embedding generated by (Malaviya et al., 2020) works best in our model.…”
Section: Implementation Details
confidence: 99%
“…This allowed us to first explore whether the proposed method worked at all before evaluating a semi-supervised approach by creating a weak supervision from a mapping between ConceptNet triples and the original OMCS sentences. While KBs are often dense with short named-entity descriptions for nodes, many nodes for commonsense KBs are parts of sentences, making them inherently sparse, which impacts their performance as empirically studied by Malaviya et al. (2020).…”
Section: Introduction
confidence: 99%
“…For path generation, we rely on a conventional KB completion task where the goal is to maximize the validity score of a tail entity e_t given the pair (e_h, r). For example, Malaviya et al. (2020) address the challenges unique to commonsense KB completion due to sparsity and large numbers of nodes resulting from encoding commonsense facts. However, KB completion does not always equate to generation of edges, with the exception of COMET from Bosselut et al. (2019), which generates tail node e_t given the pair (e_h, r).…”
Section: Introduction
confidence: 99%