2020
DOI: 10.1613/jair.1.11694
Point at the Triple: Generation of Text Summaries from Knowledge Base Triples

Abstract: We investigate the problem of generating natural language summaries from knowledge base triples. Our approach is based on a pointer-generator network which, in addition to generating regular words from a fixed target vocabulary, is able to verbalise triples in several ways. We undertake an automatic and a human evaluation on single-domain and open-domain summary generation tasks. Both show that our approach significantly outperforms other data-driven baselines.
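The abstract only sketches the architecture, so here is a minimal, illustrative example (not the authors' code) of the generate-vs-copy mixture a pointer-generator decoder computes at each step: a distribution over a fixed target vocabulary is blended with an attention (copy) distribution over the linearised input triple. All function names, shapes, and the simplifying assumption that source tokens are in-vocabulary are assumptions for illustration only.

```python
# Minimal sketch of one pointer-generator decoding step (illustrative only).
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def pointer_generator_step(vocab_logits, attention_logits, source_ids, p_gen):
    """Blend the vocabulary distribution with a copy distribution.

    vocab_logits     : scores over the fixed target vocabulary, shape (vocab_size,)
    attention_logits : attention scores over the source (triple) tokens, shape (src_len,)
    source_ids       : vocabulary id of each source token, shape (src_len,)
    p_gen            : scalar in [0, 1], probability of generating rather than copying
    """
    p_vocab = softmax(vocab_logits)      # generate a word from the fixed vocabulary
    p_copy = softmax(attention_logits)   # point at (copy) a token of the input triple

    # Mix the two distributions: generated words plus copied source tokens.
    p_final = p_gen * p_vocab
    for pos, tok_id in enumerate(source_ids):
        p_final[tok_id] += (1.0 - p_gen) * p_copy[pos]
    return p_final

# Toy example: 10-word vocabulary, a 3-token linearised triple.
rng = np.random.default_rng(0)
dist = pointer_generator_step(
    vocab_logits=rng.normal(size=10),
    attention_logits=rng.normal(size=3),
    source_ids=np.array([2, 5, 7]),  # vocabulary ids of the triple's tokens
    p_gen=0.6,
)
print(dist.sum())  # ~1.0: a valid probability distribution
```

In the paper's setting, the copy branch is what allows the decoder to verbalise constituents of the input triples directly rather than relying only on the fixed target vocabulary.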

Cited by 9 publications (10 citation statements)
References 44 publications
“…In all these examples, linguistic information from the knowledge base is used to build a parallel corpus containing triples and equivalent text sentences from Wikipedia, which is then used to train the NLG algorithm. Directly relevant to the model we propose are the proposals by Lebret et al [51], Chisholm et al [11], Liu et al [55], Yeh et al [91] and Vougiouklis et al [84,85], which extend the general encoder-decoder neural network framework from [12,82] to generate short summaries in English. The original task of English biography generation was introduced by Lebret et al [51], who used a feed-forward language model with slot-value templates to generate the first sentence of a Wikipedia summary from its corresponding infobox.…”
Section: Text Generation
confidence: 99%
“…The original task of English biography generation was introduced by Lebret et al [51], who used a feed-forward language model with slot-value templates to generate the first sentence of a Wikipedia summary from its corresponding infobox. Incremental upgrades of the original architecture on the same task include the introduction of an auto-encoding pipeline based on an attentive encoder-decoder architecture using GRUs [11], a novel double-attention mechanism over the input infoboxes' fields and their values [55], and adaptations of pointer-generator mechanisms [85,91] over the input triples.…”
Section: Text Generation
confidence: 99%
“…More recent datasets attempt to remedy some of these limitations. Pavlos et al [38,39] propose two large corpora that align Wikidata and DBpedia claims to Wikipedia text. However, they focus on verbalisations of multiple claims at a time, which limits their usefulness for important tasks, e.g.…”
Section: Background and Related Work
confidence: 99%
“…Wikidata, the web's largest collaborative KG, has very few such datasets [39,7], and existing ones rely on distant supervision, prioritising the sheer number of couplings at the expense of coupling tightness. In addition, they disproportionately represent specific entity types from Wikidata, such as people and locations, whereas Wikidata covers a much wider variety of information.…”
Section: Introduction
confidence: 99%