“…There is a large body of literature that uses the encoder-decoder framework from machine translation (Cho et al., 2014; Sutskever et al., 2014; Bahdanau et al., 2014) for NLG (Sleimi and Gardent, 2016; Gardent et al., 2017; Chisholm et al., 2017; Mei et al., 2016; Lebret et al., 2016; Wiseman et al., 2017; Vougiouklis et al., 2018a; Liu et al., 2018; Li and Wan, 2018; Gehrmann et al., 2018; Yeh et al., 2018). The decoder, typically a multi-gated Recurrent Neural Network (RNN) formed of either Long Short-Term Memory cells (Hochreiter and Schmidhuber, 1997) or Gated Recurrent Units (Cho et al., 2014), is conditioned on a set of structured records and acts as a language model.…”
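To make the described setup concrete, the following is a minimal, illustrative sketch (not taken from any of the cited systems) of a GRU decoder that is conditioned on a fixed-size encoding of structured records and acts as a language model: at each step it consumes the previous token plus the record encoding and emits a distribution over the vocabulary. All dimensions, names, and the greedy decoding loop are assumptions for illustration; weights are random rather than trained.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class GRUDecoder:
    """Toy GRU decoder conditioned on a record encoding (untrained;
    weights are random and dimensions are purely illustrative)."""

    def __init__(self, vocab_size, hidden_size, record_size, seed=0):
        rng = np.random.default_rng(seed)
        d = hidden_size
        # Cell input: one-hot previous token concatenated with the
        # (hypothetical) fixed-size encoding of the structured records.
        in_size = vocab_size + record_size
        self.Wz = rng.normal(0, 0.1, (d, in_size + d))  # update gate
        self.Wr = rng.normal(0, 0.1, (d, in_size + d))  # reset gate
        self.Wh = rng.normal(0, 0.1, (d, in_size + d))  # candidate state
        self.Wo = rng.normal(0, 0.1, (vocab_size, d))   # output projection
        self.vocab_size = vocab_size
        self.hidden_size = hidden_size

    def step(self, prev_token, h, records):
        """One GRU step; returns next-token distribution and new state."""
        x = np.concatenate([np.eye(self.vocab_size)[prev_token], records])
        xh = np.concatenate([x, h])
        z = sigmoid(self.Wz @ xh)            # update gate
        r = sigmoid(self.Wr @ xh)            # reset gate
        h_tilde = np.tanh(self.Wh @ np.concatenate([x, r * h]))
        h = (1 - z) * h + z * h_tilde
        logits = self.Wo @ h
        probs = np.exp(logits - logits.max())
        return probs / probs.sum(), h

    def generate(self, records, length=5, bos=0):
        """Greedy decoding: the decoder is used as a language model
        conditioned on the record encoding."""
        h = np.zeros(self.hidden_size)
        tok, out = bos, []
        for _ in range(length):
            probs, h = self.step(tok, h, records)
            tok = int(np.argmax(probs))
            out.append(tok)
        return out
```

In the cited systems the conditioning is learned (often with attention over individual records) rather than a single concatenated vector, but the control flow above reflects the shared decoder-as-language-model structure.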