Proceedings of the Fifteenth Workshop on Graph-Based Methods for Natural Language Processing (TextGraphs-15) 2021
DOI: 10.18653/v1/2021.textgraphs-1.2

Modeling Graph Structure via Relative Position for Text Generation from Knowledge Graphs

Abstract: We present a novel encoder-decoder architecture for graph-to-text generation based on the Transformer, called the Graformer. With our novel graph self-attention, every node in the input graph is taken into account for the encoding of every other node, not only its direct neighbors, facilitating the detection of global patterns. For this, the relation between any two nodes is characterized by the length of the shortest path between them, including the special case when there is no such path. The Graformer learns to wei…
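
The abstract characterizes the relation between any two nodes by the length of the shortest path between them, with a special case for pairs that no path connects. As a rough illustration of that preprocessing step, here is a minimal Python sketch (not the authors' implementation; the function name and the -1 placeholder for "no path" are assumptions of this sketch):

from collections import deque

NO_PATH = -1  # stand-in for the "no path exists" case (assumption of this sketch)

def shortest_path_matrix(num_nodes, edges):
    """Pairwise shortest-path lengths for a directed graph given as (u, v) edges."""
    adj = [[] for _ in range(num_nodes)]
    for u, v in edges:
        adj[u].append(v)
    dist = [[NO_PATH] * num_nodes for _ in range(num_nodes)]
    for start in range(num_nodes):
        dist[start][start] = 0
        queue = deque([start])
        while queue:  # plain BFS, since every edge has unit length
            u = queue.popleft()
            for v in adj[u]:
                if dist[start][v] == NO_PATH:
                    dist[start][v] = dist[start][u] + 1
                    queue.append(v)
    return dist

# Example: node 2 is unreachable from node 0, so that entry stays at NO_PATH.
print(shortest_path_matrix(3, [(0, 1), (2, 1)]))  # [[0, 1, -1], [-1, 0, -1], [-1, 1, 0]]

Such a matrix can then serve as the lookup index for relative position information in graph self-attention, which is the role the abstract describes.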

Cited by 13 publications (10 citation statements)
References: 24 publications

“…Recently, graph neural networks (GNNs) have been widely explored for modeling graph structures. Approaches [1379,1380,1381,1382,1383,1384,1385,1386] leverage GNNs and variants to directly encode graph structures. Another line of research [1387,1385,1388,1389] inject the structure information into a sequence-based model, e.g., Transformer.…”
Section: Data-to-text Generation
Citation type: mentioning (confidence: 99%)
“…Approaches [1379,1380,1381,1382,1383,1384,1385,1386] leverage GNNs and variants to directly encode graph structures. Another line of research [1387,1385,1388,1389] inject the structure information into a sequence-based model, e.g., Transformer. Same as the table-to-text task, pre-training methods also boost the graph-to-text generation.…”
Section: Data-to-text Generation
Citation type: mentioning (confidence: 99%)
“…Graph Neural Networks (GNNs) (Veličković et al, 2018) have shown to be effective at encoding graph data. For the KG-to-text task, recent works have leveraged GNNs to encode a graph's neighborhood information (Koncel-Kedziorski et al, 2019;Marcheggiani and Perez-Beltrachini, 2018;Ribeiro et al, 2020b;Schmitt et al, 2021;Guo et al, 2019;Jin et al, 2020) before decoding its corresponding textual representation. Other work instead choose a more global approach and base their encoder on a Transformer-based architecture (Vaswani et al, 2017), calculating self-attention from all the nodes in a graph (Zhu et al, 2019;Cai and Lam, 2020;Ke et al, 2021).…”
Section: KG-to-Text with Graph Transformers
Citation type: mentioning (confidence: 99%)
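
The contrast this excerpt draws, neighborhood-based GNN encoding versus global Transformer self-attention over all nodes, can be made concrete with a small numerical sketch (an illustration of the general idea only, not code from any of the cited papers; shapes and weights are arbitrary):

import numpy as np

rng = np.random.default_rng(0)
num_nodes, dim = 4, 8
X = rng.normal(size=(num_nodes, dim))  # node features
A = np.array([[0, 1, 0, 0],            # adjacency matrix: direct neighbors only
              [1, 0, 1, 0],
              [0, 1, 0, 0],
              [0, 0, 0, 0]], dtype=float)

# (a) GNN-style layer: each node aggregates only its direct neighbors.
deg = np.maximum(A.sum(axis=1, keepdims=True), 1.0)
gnn_out = (A @ X) / deg  # mean over neighbors

# (b) Transformer-style layer: every node attends to every node.
W_q = rng.normal(size=(dim, dim))
W_k = rng.normal(size=(dim, dim))
scores = (X @ W_q) @ (X @ W_k).T / np.sqrt(dim)
attn = np.exp(scores) / np.exp(scores).sum(axis=1, keepdims=True)  # row-wise softmax
global_out = attn @ X  # weighted mixture of all nodes, regardless of adjacency

Stacking GNN layers widens the receptive field one hop at a time, whereas the global variant sees the whole graph in a single layer, which is what motivates injecting structural information such as shortest-path distances into the attention scores.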
“…It is noteworthy that [the cited authors] achieve their best performance by combining both variants of SPR with sequential position information, and that SPR as the sole sentence representation, i.e., without additional sequential information, leads to a large drop in performance. [Figure reproduced] from (Schmitt et al., 2021), showing their definition of relative position encodings in a graph based on the lengths of shortest paths; ∞ means that there is no path between two nodes.…”
Section: Hierarchies (Trees)
Citation type: mentioning (confidence: 99%)
“…In contrast to the other approaches, Graformer explicitly models disconnected graphs (∞) and does not add any sequential position information. Unfortunately, Schmitt et al (2021) do not evaluate Graformer on the same tasks as the other discussed approaches, which makes a performance comparison difficult. Decoder: The decoder is also composed of a stack of N = 6 identical layers.…”
Section: Arbitrary Graphs
Citation type: mentioning (confidence: 99%)
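
To make the "explicitly models disconnected graphs (∞)" remark concrete: one common way to use shortest-path distances in self-attention is to map each distance to a learned bias that is added to the attention logits, reserving one extra bucket for unreachable pairs. The following sketch shows that general mechanism only; the bucketing, clamping threshold, and names are assumptions of this sketch and not the exact Graformer formulation:

import numpy as np

MAX_DIST = 4                   # distances >= MAX_DIST share one bucket (assumption)
NO_PATH_BUCKET = MAX_DIST + 1  # dedicated bucket for "no path at all"

def distance_buckets(dist):
    dist = np.asarray(dist)
    buckets = np.minimum(dist, MAX_DIST)
    buckets[dist < 0] = NO_PATH_BUCKET  # -1 marks unreachable pairs, as in the earlier BFS sketch
    return buckets

def biased_attention_logits(Q, K, dist, bias_table):
    """bias_table: learned vector with one scalar per distance bucket."""
    logits = Q @ K.T / np.sqrt(Q.shape[-1])
    return logits + bias_table[distance_buckets(dist)]

# Toy usage with the 3-node distance matrix from the earlier BFS sketch.
dist = [[0, 1, -1], [-1, 0, -1], [-1, 1, 0]]
rng = np.random.default_rng(1)
Q, K = rng.normal(size=(3, 8)), rng.normal(size=(3, 8))
bias_table = rng.normal(size=(NO_PATH_BUCKET + 1,))
print(biased_attention_logits(Q, K, dist, bias_table).shape)  # (3, 3)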