2019
DOI: 10.1609/aaai.v33i01.33016786
Hierarchical Encoder with Auxiliary Supervision for Neural Table-to-Text Generation: Learning Better Representation for Tables

Abstract: Generating natural language descriptions for structured tables that consist of multiple attribute-value tuples is a convenient way to help people understand the tables. Most neural table-to-text models are based on the encoder-decoder framework. However, it is hard for a vanilla encoder to learn an accurate semantic representation of a complex table. The challenges are two-fold: first, table-to-text datasets often contain a large number of attributes across different domains, thus it is hard for t…

Cited by 32 publications (38 citation statements)
References 23 publications
“…It consists in conditioning the decoder on entity representations that are updated during inference at each decoding step. On the other hand, Liu et al. [18,17] focus instead on introducing structure into the encoder. For instance, they propose a dual encoder [17] which separately encodes the sequence of element names and the sequence of element values.…”
Section: Related Work
confidence: 99%
“…On the other hand, Liu et al. [18,17] focus instead on introducing structure into the encoder. For instance, they propose a dual encoder [17] which separately encodes the sequence of element names and the sequence of element values. These approaches are, however, designed for single-entity data structures and do not account for delimitation between entities.…”
Section: Related Work
confidence: 99%
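The dual encoder attributed to Liu et al. [17] is only described at a high level in these citation statements. The following is a minimal PyTorch sketch of the idea, not the published architecture: the vocabulary sizes, hidden dimension, GRU encoders, and fusion by concatenation are all illustrative assumptions.

```python
# Minimal sketch of a dual encoder: attribute names and attribute values
# are encoded by two separate recurrent encoders, then fused into one
# table representation. All hyperparameters here are assumptions.
import torch
import torch.nn as nn

class DualEncoder(nn.Module):
    def __init__(self, name_vocab=2000, value_vocab=20000, dim=128):
        super().__init__()
        self.name_emb = nn.Embedding(name_vocab, dim)
        self.value_emb = nn.Embedding(value_vocab, dim)
        self.name_rnn = nn.GRU(dim, dim, batch_first=True)   # encodes attribute names
        self.value_rnn = nn.GRU(dim, dim, batch_first=True)  # encodes attribute values
        self.fuse = nn.Linear(2 * dim, dim)

    def forward(self, name_ids, value_ids):
        # name_ids, value_ids: (batch, seq_len), aligned attribute-value tuples
        _, h_name = self.name_rnn(self.name_emb(name_ids))
        _, h_value = self.value_rnn(self.value_emb(value_ids))
        joint = torch.cat([h_name[-1], h_value[-1]], dim=-1)
        return torch.tanh(self.fuse(joint))  # one vector per table

enc = DualEncoder()
names = torch.randint(0, 2000, (4, 10))
values = torch.randint(0, 20000, (4, 10))
print(enc(names, values).shape)  # torch.Size([4, 128])
```

Encoding names and values in separate streams lets the model learn attribute semantics independently of the (often rare) value tokens, which is the structural point the citing papers highlight.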
“…In GF, NP, VV, V2, VP, and Cl stand for noun phrase, verb-phrase-complement verb, two-place verb, verb phrase, and clause, respectively. Note that although a set of GF grammar rules can be used to construct a constituency-based parse tree [12], the reverse direction does not always hold. To the best of our knowledge, there exists no algorithm for converting a constituency-based parse tree to a set of GF grammar rules.…”
Section: GF Grammar Encoder
confidence: 99%
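The asymmetry claimed above — GF rules determine a constituency tree, but a constituency tree does not determine the rules — can be made concrete with a small sketch. The rule names and categories below are simplified hypothetical stand-ins for GF resource-grammar functions, written in Python rather than GF itself, purely to show why the mapping is one-way.

```python
# Sketch: applying GF-style abstract rules yields a tree whose nodes can be
# read off as constituency labels, but the constituency tree alone cannot
# recover the rules. Rule/category names are simplified assumptions.
from dataclasses import dataclass
from typing import Tuple

@dataclass
class Rule:
    name: str               # GF abstract-function name, e.g. "PredVP"
    result: str             # result category, e.g. "Cl"
    args: Tuple[str, ...]   # argument categories

@dataclass
class Node:
    rule: Rule
    children: tuple         # sub-nodes or lexical leaves (strings)

# Hypothetical stand-ins for GF resource-grammar rules:
PredVP  = Rule("PredVP",  "Cl", ("NP", "VP"))   # NP + VP -> clause
ComplV2 = Rule("ComplV2", "VP", ("V2", "NP"))   # V2 + NP -> verb phrase

def to_constituency(node) -> str:
    """Forget rule identities; keep only the category labels."""
    if isinstance(node, str):
        return node
    kids = ", ".join(to_constituency(c) for c in node.children)
    return f"{node.rule.result}({kids})"

tree = Node(PredVP, ("Mary", Node(ComplV2, ("sees", "John"))))
print(to_constituency(tree))   # Cl(Mary, VP(sees, John))
# The reverse map is lossy: distinct GF rules can share the same category
# signature, so the bare constituency tree cannot identify which rule applied.
```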
“…A more advanced NLG system in this direction is described in [16]; it works with ontologies annotated using the Attempto language and can generate natural language descriptions for workflows created by the systems built in the Phylotastic project [3]. The applications targeted by these systems are significantly different from NLG systems whose main purpose is to generate high-quality natural language descriptions of objects or reports, such as those reported at the recent AAAI conference [12,6,15].…”
Section: Introduction
confidence: 99%
“…Neural methods for DTT have shown the ability to produce fluent texts conditioned on input data in several domains [7,8] without relying on heavy manual work from field experts. They successfully combine ideas from machine translation, such as encoder-decoder architectures [9] and attention mechanisms [10,11], and from extractive summarization, such as the copy mechanism [12,13], often explicitly modeling the key-value structure of the input table [14][15][16][17][18][19]. However, a common issue is that most commonly used models share a word-by-word representation of the data in both input sequences and generated utterances; such schemes cannot be effective without a special, non-neural delexicalization phase that handles rare or unknown words, such as proper names, telephone numbers, or foreign words [20].…”
Section: Introduction
confidence: 99%
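The delexicalization phase referenced in [20] is, in its simplest form, a substitution applied before training and inverted after generation: rare surface strings (proper names, phone numbers) are swapped for slot tokens so the neural model never has to generate them. The sketch below is a hypothetical minimal version; the slot names and regex pattern are assumptions, not the cited systems' implementation.

```python
# Minimal sketch of pre-/post-processing delexicalization: replace rare
# strings with slot tokens, remember the mapping, restore after generation.
import re

def delexicalize(text, entities):
    """Replace known entity strings with slot tokens; return text + lexicon."""
    lexicon = {}
    for slot, value in entities.items():
        token = f"__{slot.upper()}__"
        text = text.replace(value, token)
        lexicon[token] = value

    def phone_slot(match):
        token = f"__PHONE_{len(lexicon)}__"
        lexicon[token] = match.group(0)
        return token

    # also catch phone-number-like spans not covered by the entity list
    text = re.sub(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b", phone_slot, text)
    return text, lexicon

def relexicalize(text, lexicon):
    """Restore the original strings in the generated utterance."""
    for token, value in lexicon.items():
        text = text.replace(token, value)
    return text

delex, lex = delexicalize("Call Alice Kowalski at 555-123-4567.",
                          {"name": "Alice Kowalski"})
print(delex)                     # Call __NAME__ at __PHONE_1__.
print(relexicalize(delex, lex))  # Call Alice Kowalski at 555-123-4567.
```

Because the model only ever sees the slot tokens, its vocabulary stays small; the trade-off, as the citing paper notes, is that this step is manual and non-neural.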