2018
DOI: 10.1007/978-3-319-91947-8_15
T2S: An Encoder-Decoder Model for Topic-Based Natural Language Generation

Cited by 2 publications (7 citation statements) · References 19 publications
“…Many works have proposed utilizing static topic information to improve generation performance. Chen et al (2016) and Ou et al (2018) propose representing the topic of each sentence as a learnable vector. The topic is predicted from the input sentence and is used to enhance the generation phase.…”
Section: Company Topic
confidence: 99%
“…Xing et al (2017) and Zhang et al (2016) detect the topic representation by applying a pre-trained LDA model to the input sequence. Moreover, Choudhary et al (2017) and Ou et al (2018) predict the topic representation directly from the input sequence using Recurrent Neural Networks (RNN). All of the above methods assume that the topic does not change during generation, which makes the problem tractable but sacrifices the ability to model the dynamic nature of topic information.…”
Section: Company Topic
confidence: 99%
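The static-topic approach described in the snippets above can be sketched as follows. This is a minimal illustration, not the papers' actual models: all dimensions, weight names, and the vanilla-RNN encoder are hypothetical choices made for the example. The key property it demonstrates is that the topic vector is computed once from the input sequence and then stays fixed for the entire generation phase.

```python
import numpy as np

# Hypothetical, tiny dimensions for illustration only.
rng = np.random.default_rng(0)
vocab, emb_dim, hid_dim, topic_dim = 50, 8, 16, 4

E = rng.normal(0, 0.1, (vocab, emb_dim))         # token embeddings
W_xh = rng.normal(0, 0.1, (emb_dim, hid_dim))    # RNN input-to-hidden weights
W_hh = rng.normal(0, 0.1, (hid_dim, hid_dim))    # RNN recurrent weights
W_ht = rng.normal(0, 0.1, (hid_dim, topic_dim))  # hidden-to-topic projection

def predict_topic(token_ids):
    """Run a vanilla RNN over the input and project its last hidden
    state to a topic vector — a sketch of predicting the topic
    directly from the input sequence, as in the RNN-based approaches
    cited above."""
    h = np.zeros(hid_dim)
    for t in token_ids:
        h = np.tanh(E[t] @ W_xh + h @ W_hh)
    return h @ W_ht

# The topic is predicted once from the input sentence...
topic = predict_topic([3, 17, 42, 5])
# ...and then conditions every decoding step without changing,
# which is exactly the static-topic assumption the last snippet
# says these methods share.
print(topic.shape)  # (4,)
```

A dynamic-topic model would instead recompute or update `topic` at each decoding step; the static variant trades that flexibility for a simpler, tractable conditioning scheme.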