Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP) 2020
DOI: 10.18653/v1/2020.emnlp-main.90
ENT-DESC: Entity Description Generation by Exploring Knowledge Graph

Abstract: Previous works on knowledge-to-text generation take as input a few RDF triples or key-value pairs conveying the knowledge of some entities to generate a natural language description. Existing datasets, such as WIKIBIO, WebNLG, and E2E, basically have a good alignment between an input triple/pair set and its output text. However, in practice, the input knowledge could be more than enough, since the output description may only cover the most significant knowledge. In this paper, we introduce a large-scale and cha…
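The setting the abstract describes can be sketched in a few lines: the input is a set of RDF-style triples, and only the most significant ones surface in the output description. The entity, triples, and "salient" relation set below are invented for illustration; they are not drawn from the ENT-DESC dataset.

```python
# Hedged sketch of the knowledge-to-text input described in the abstract:
# the input triples may exceed what the output description actually covers.
from typing import List, Tuple

Triple = Tuple[str, str, str]  # (subject, relation, object)

def select_salient(triples: List[Triple], salient_relations: set) -> List[Triple]:
    """Keep only triples whose relation is deemed salient for the description."""
    return [t for t in triples if t[1] in salient_relations]

# Hypothetical example entity and triples (not from ENT-DESC).
triples = [
    ("Ada_Lovelace", "occupation", "mathematician"),
    ("Ada_Lovelace", "birth_year", "1815"),
    ("Ada_Lovelace", "eye_color", "unknown"),  # noise: unlikely to appear in a description
]
salient = {"occupation", "birth_year"}
print(select_salient(triples, salient))
```

This is only a toy salience filter; the paper's point is that real models must learn which knowledge to distill, rather than rely on a hand-written relation whitelist.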

Cited by 10 publications (14 citation statements) · References 19 publications
“…Existing Entity Description Generation Task and Dataset Previous works (Novikova et al., 2017; Cheng et al., 2020; Trisedya et al., 2020) mainly take as input structured data such as knowledge graphs to generate entity descriptions. However, knowledge graphs, often mined from text corpora, are overwhelmingly incomplete for real-world entities and may not be updated in real time (Dong et al., 2014).…”
Section: Related Work
confidence: 99%
“…The Abstract Meaning Representation (AMR) represents the semantic information of each sentence as a rooted directed graph, where each edge is a semantic relation and each node is a concept (Song et al., 2018; Zhu et al., 2019; Mager et al., 2020; Wang et al., 2020c). Knowledge-graph-to-text generation has advanced tasks such as entity description generation and medical image report generation by producing text from a subgraph of the knowledge graph (Cheng et al., 2020). Although all of these consider graph structures, our method generates one sentence for each node of a large directed acyclic graph, whereas AMR-to-text and knowledge-graph-to-text methods generate sentences for a subgraph or the entire graph.…”
Section: Related Work
confidence: 99%
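The AMR structure described in the statement above — a rooted directed graph whose nodes are concepts and whose labeled edges are semantic relations — can be sketched as a small adjacency structure. The class and the encoded sentence are illustrative only, not a real AMR parser or the cited authors' implementation; the example "The boy wants to go" is a standard AMR teaching example.

```python
# Minimal sketch of an AMR-style rooted directed graph:
# nodes are concepts, labeled edges are semantic relations.
from collections import defaultdict

class ConceptGraph:
    def __init__(self, root: str):
        self.root = root
        # concept -> list of (relation, target concept)
        self.edges = defaultdict(list)

    def add_relation(self, src: str, relation: str, dst: str) -> None:
        self.edges[src].append((relation, dst))

    def neighbors(self, concept: str):
        return self.edges[concept]

# "The boy wants to go": (want-01 :ARG0 boy :ARG1 (go-01 :ARG0 boy))
g = ConceptGraph(root="want-01")
g.add_relation("want-01", "ARG0", "boy")
g.add_relation("want-01", "ARG1", "go-01")
g.add_relation("go-01", "ARG0", "boy")  # re-entrancy: "boy" has two incoming edges
print(g.neighbors("want-01"))
```

Note the re-entrant node "boy": AMR graphs are rooted and directed but not trees, which is part of why graph encoders are used for AMR-to-text generation.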
“…Neural text generation (Bowman et al., 2016; Vaswani et al., 2017; Sutskever et al., 2014; Song et al., 2020b) could be a plausible solution to this problem, generating the definition text from the terminology text. Encouraging results from neural text generation have been observed on related tasks, such as paraphrase generation (Li et al., 2020), description generation (Cheng et al., 2020), synonym generation (Gupta et al., 2015), and data augmentation (Malandrakis et al., 2019). However, it remains unclear how to generate definitions, a task that maps concise text in the input space (i.e., terminology) to longer text in the output space (i.e., the definition).…”
Section: Introduction
confidence: 99%
“…The two models are trained alternately with dual learning. Although Cheng et al. (2020) proposed the ENT-DESC task, which aims to generate better text descriptions for a few entities by exploring knowledge from a KB, their focus is more on distilling the useful part of the input knowledge.…”
Section: Related Work
confidence: 99%