Proceedings of the 11th International Conference on Natural Language Generation 2018
DOI: 10.18653/v1/w18-6501
Deep Graph Convolutional Encoders for Structured Data to Text Generation

Abstract: Most previous work on neural text generation from graph-structured data relies on standard sequence-to-sequence methods. These approaches linearise the input graph to be fed to a recurrent neural network. In this paper, we propose an alternative encoder based on graph convolutional networks that directly exploits the input structure. We report results on two graph-to-sequence datasets that empirically show the benefits of explicitly encoding the input graph structure.

Cited by 106 publications (88 citation statements); references 27 publications (34 reference statements).
“…Stacking layers was demonstrated to be effective in graph-to-sequence approaches (Marcheggiani and Perez-Beltrachini, 2018; Koncel-Kedziorski et al., 2019; Damonte and Cohen, 2019) and allows us to test for their contributions to the system performance more easily. We employ different GNNs for both graph encoders (Section 3.3).…”
Section: Dual Graph Encoder
confidence: 99%
“…Previous studies work on embedding-based models (Nguyen et al., 2018) and entity alignment models (Chen et al., 2017; Trisedya et al., 2019) to enrich a knowledge base. Following the success of the sequence-to-sequence architecture (Bahdanau et al., 2015) for generating sentences from structured data (Marcheggiani and Perez-Beltrachini, 2018; Trisedya et al., 2018), we employ this architecture to do the opposite, which is extracting triples from a sentence.…”
Section: Introduction
confidence: 99%
“…Another critical open question in our framework is whether the surface generator will be able to generate surfaces representative enough to allow for generalization to real examples. Current NLG systems are increasingly capable of structured text generation (Marcheggiani and Perez-Beltrachini, 2018), and though they produce relatively short surfaces, we believe that coupling them with the generated action graphs is a promising approach to scaling up to longer sequences while maintaining coherence. Such systems can use sentence-level semantic parses as training data, meaning they can leverage existing weakly-supervised shallow parsing techniques.…”
Section: Discussion
confidence: 99%