Proceedings of the 2nd International Workshop on Natural Language Generation and the Semantic Web (WebNLG 2016)
DOI: 10.18653/v1/w16-3511
Generating Paraphrases from DBPedia using Deep Learning

Abstract: Recent deep learning approaches to Natural Language Generation mostly rely on sequence-to-sequence models. In these approaches, the input is treated as a sequence, whereas the input to generation is in most cases a tree or a graph. In this paper, we describe an experiment showing how enriching a sequential input with structural information improves results and helps support the generation of paraphrases.
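The abstract contrasts treating the input as a flat sequence with enriching it with structural information. Below is a minimal sketch, in Python, of one plausible way such enrichment could look for DBpedia-style triples; the triples, bracket tokens, and function names are hypothetical illustrations, not the paper's actual encoding.

# Illustrative sketch (not the authors' code): linearizing RDF triples for a
# sequence-to-sequence model, either as a flat token sequence or enriched
# with structural bracket tokens. Triples and markers are hypothetical.

triples = [
    ("John_Doe", "birthPlace", "London"),
    ("John_Doe", "occupation", "Writer"),
]

def linearize_flat(triples):
    """Concatenate subject/property/object tokens with no structure."""
    return " ".join(" ".join(t) for t in triples)

def linearize_structured(triples):
    """Add bracket tokens marking the tree structure of each triple."""
    parts = []
    for subj, prop, obj in triples:
        parts.append(f"( {subj} ( {prop} {obj} ) )")
    return " ".join(parts)

print(linearize_flat(triples))
# John_Doe birthPlace London John_Doe occupation Writer
print(linearize_structured(triples))
# ( John_Doe ( birthPlace London ) ) ( John_Doe ( occupation Writer ) )

The structured variant gives the encoder explicit cues about which property belongs to which subject, which is the kind of signal a flat sequence loses.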

Cited by 10 publications (10 citation statements); references 7 publications.
“…There is a large body of literature that uses the encoder-decoder framework from machine translation (Cho et al., 2014; Sutskever et al., 2014; Bahdanau et al., 2014) for NLG (Sleimi and Gardent, 2016; Gardent et al., 2017; Chisholm et al., 2017; Mei et al., 2016; Lebret et al., 2016; Wiseman et al., 2017; Vougiouklis et al., 2018a; Liu et al., 2018; Li and Wan, 2018; Gehrmann et al., 2018; Yeh et al., 2018). The decoder, typically a multi-gated Recurrent Neural Network (RNN), formed of either Long Short-Term Memory cells (Hochreiter and Schmidhuber, 1997) or Gated Recurrent Units (Cho et al., 2014), is conditioned on a set of structured records and acts as a language model.…”
Section: Related Work
confidence: 99%
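The passage above describes a recurrent decoder conditioned on structured records. Below is a minimal sketch, assuming PyTorch, of that general pattern; the mean-pooled record encoder, the dimensions, and the class name are illustrative assumptions, not any cited system's actual architecture.

# Minimal sketch (assumption: PyTorch): a GRU decoder whose initial hidden
# state is derived from an encoding of structured records, so it acts as a
# conditioned language model over the output text.
import torch
import torch.nn as nn

class RecordConditionedDecoder(nn.Module):
    def __init__(self, vocab_size, record_dim, emb_dim=64, hid_dim=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        # Project a pooled record encoding into the decoder's initial state.
        self.init_state = nn.Linear(record_dim, hid_dim)
        self.gru = nn.GRU(emb_dim, hid_dim, batch_first=True)
        self.out = nn.Linear(hid_dim, vocab_size)

    def forward(self, records, prev_tokens):
        # records: (batch, n_records, record_dim); prev_tokens: (batch, seq)
        pooled = records.mean(dim=1)                       # pool the records
        h0 = torch.tanh(self.init_state(pooled)).unsqueeze(0)
        emb = self.embed(prev_tokens)
        hidden, _ = self.gru(emb, h0)
        return self.out(hidden)                            # next-token logits

A Long Short-Term Memory cell (nn.LSTM) could be swapped in for the GRU without changing the overall conditioning scheme.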
“…The decoder, typically a multi-gated Recurrent Neural Network (RNN), formed of either Long Short-Term Memory cells (Hochreiter and Schmidhuber, 1997) or Gated Recurrent Units (Cho et al., 2014), is conditioned on a set of structured records and acts as a language model. Adaptations of such systems have shown great potential at tackling various aspects of triples-to-text tasks, ranging from microplanning by Gardent et al. (2017) to generation of paraphrases by Sleimi and Gardent (2016). Pointer-generator networks have been brought up recently by Gu et al. (2016) and See et al. (2017) as alterations of the original pointer architecture proposed by Vinyals et al. (2015).…”
Section: Related Work
confidence: 99%
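The quoted passage mentions pointer-generator networks (Gu et al., 2016; See et al., 2017). Below is a minimal sketch, assuming PyTorch, of the copy/generate mixture at their core; the tensor names and shapes are illustrative assumptions.

# Minimal sketch (assumption: PyTorch): a learned gate p_gen blends the
# decoder's vocabulary distribution with a copy distribution given by
# attention over the source tokens.
import torch

def pointer_generator_dist(vocab_dist, attn_weights, src_ids, p_gen):
    """
    vocab_dist:   (batch, vocab)    softmax over the output vocabulary
    attn_weights: (batch, src_len)  attention over source positions
    src_ids:      (batch, src_len)  vocabulary ids of the source tokens
    p_gen:        (batch, 1)        sigmoid gate in [0, 1]
    """
    copy_dist = torch.zeros_like(vocab_dist)
    # Accumulate attention mass onto the source tokens' vocabulary ids.
    copy_dist.scatter_add_(1, src_ids, attn_weights)
    return p_gen * vocab_dist + (1.0 - p_gen) * copy_dist

The gate lets the model copy rare entity names directly from the input records while still generating fluent connective text from the vocabulary.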
“…Similar sequence-to-sequence approaches have been used in the past for the text paraphrasing task (Brad and Rebedea, 2017; Sleimi and Gardent, 2016), which shares similarities with the task we set out to solve. This gives us reason to believe that a sequence-to-sequence approach is a viable way of implementing the apprentice.…”
Section: The Apprentice
confidence: 99%
“…This attention comes from the great number of published works, such as (Cimiano et al., 2013; Duma and Klein, 2013; Ell and Harth, 2014; Biran and McKeown, 2015), which used RDF as input data and achieved promising results. Moreover, the works published in the WebNLG challenge (Colin et al., 2016) that used deep learning techniques, such as (Sleimi and Gardent, 2016), also contributed to this interest. RDF has also shown promising benefits for the generation of benchmarks for evaluating NLG systems, e.g., (Gardent et al., 2017; Mohammed et al., 2016; Schwitter et al., 2004; Hewlett et al., 2005; Sun and Mellish, 2006).…”
Section: NLG for the Web of Data
confidence: 99%