Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing
DOI: 10.18653/v1/d17-1065

Neural Response Generation via GAN with an Approximate Embedding Layer

Abstract: This paper presents a Generative Adversarial Network (GAN) to model single-turn short-text conversations, which trains a sequence-to-sequence (Seq2Seq) network for response generation simultaneously with a discriminative classifier that measures the differences between human-produced responses and machine-generated ones. In addition, the proposed method introduces an approximate embedding layer to solve the non-differentiable problem caused by the sampling-based output decoding procedure in the Seq2Seq generative model…
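The approximate embedding layer addresses a specific obstacle: sampling a discrete token id from the decoder's softmax breaks the gradient path from the discriminator back to the generator. Below is a minimal sketch of the general idea, assuming a PyTorch setup; the function name, shapes, and sample values are illustrative, not the authors' code. Instead of a sampled token's embedding, the discriminator receives the softmax-weighted average of all word embeddings, which keeps the whole pipeline differentiable.

```python
import torch
import torch.nn.functional as F

def approximate_embedding(logits: torch.Tensor,
                          embedding: torch.nn.Embedding,
                          temperature: float = 1.0) -> torch.Tensor:
    """logits: (batch, vocab) decoder scores for one step.
    Returns (batch, emb_dim): the expected word embedding under softmax(logits)."""
    probs = F.softmax(logits / temperature, dim=-1)  # (batch, vocab)
    return probs @ embedding.weight                  # (batch, vocab) @ (vocab, emb_dim)

# Illustrative usage: the "soft" word vector replaces a sampled token's
# embedding both as the next decoder input and as the discriminator's input.
vocab_size, emb_dim = 10000, 300
emb = torch.nn.Embedding(vocab_size, emb_dim)
step_logits = torch.randn(8, vocab_size, requires_grad=True)
soft_word = approximate_embedding(step_logits, emb)  # (8, 300)
soft_word.sum().backward()  # gradients reach step_logits: fully differentiable
```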

Cited by 83 publications (85 citation statements)
References 18 publications
“…To evaluate our proposed method, we employ BLEU [19] to measure the quality of generated sentences by computing overlapping lexical units (e.g., unigrams, bigrams) with the reference sentence. We also consider three embedding-based metrics [6] (Embedding Average, Embedding Greedy, and Embedding Extreme) to evaluate our model, following several recent studies on text generation [24,30,35]. These three metrics compute the semantic similarity between the generated and reference answers according to the word embeddings.…”
Section: Evaluation Metrics (mentioning)
confidence: 99%
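As a concrete illustration of the embedding-based metrics named in this excerpt, here is a minimal sketch of Embedding Average: average the word vectors of each sentence and take the cosine similarity of the two sentence vectors. The `word_vecs` lookup is a hypothetical token-to-vector dict (e.g., loaded from pretrained embeddings); the Greedy and Extreme variants differ only in how word vectors are pooled. BLEU itself is normally computed with an existing implementation such as `nltk.translate.bleu_score`.

```python
import numpy as np

def embedding_average(candidate, reference, word_vecs):
    """Cosine similarity of mean word vectors (Embedding Average metric).
    word_vecs: hypothetical dict mapping token -> np.ndarray."""
    def sent_vec(tokens):
        vecs = [word_vecs[t] for t in tokens if t in word_vecs]
        return np.mean(vecs, axis=0) if vecs else None
    c, r = sent_vec(candidate), sent_vec(reference)
    if c is None or r is None:
        return 0.0  # no known words in one of the sentences
    return float(np.dot(c, r) / (np.linalg.norm(c) * np.linalg.norm(r)))

# Toy vectors for illustration only.
word_vecs = {"hello": np.array([1.0, 0.0]),
             "hi": np.array([0.9, 0.1]),
             "there": np.array([0.0, 1.0])}
print(embedding_average(["hello", "there"], ["hi", "there"], word_vecs))
```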
“…Dataset: Following previous studies (Vinyals and Le, 2015; Li et al., 2017; Xu et al., 2017), we choose the widely-used OpenSubtitles (Tiedemann, 2009) dataset to evaluate different methods. The OpenSubtitles dataset contains movie scripts organized by characters, where we follow Li et al. (2016b) to retain subtitles containing 5-50 words.…”
Section: Open-domain Dialogue Learning (mentioning)
confidence: 99%
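The 5-50 word retention rule mentioned here is a simple length filter; a minimal sketch follows, under the assumption of whitespace tokenization (the variable names and sample data are illustrative, not the actual preprocessing code):

```python
def keep_utterance(line: str, lo: int = 5, hi: int = 50) -> bool:
    """Keep only utterances of 5-50 whitespace tokens, as in Li et al. (2016b)."""
    return lo <= len(line.split()) <= hi

# Illustrative query-response pairs; a real pipeline would stream OpenSubtitles.
raw_pairs = [
    ("how are you doing today my friend", "i am doing fine thanks"),
    ("hi", "hello there how are you doing today"),  # query too short: dropped
]
pairs = [(q, a) for q, a in raw_pairs if keep_utterance(q) and keep_utterance(a)]
print(pairs)  # only the first pair survives the filter
```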
“…We argue that in a meaningful and coherent dialogue, changing the utterance order will lead to a low-quality dialogue. However, most existing neural-based dialogue systems either encode the full dialogue history (Li et al., 2017; Xu et al., 2017) or only the current utterance (Liu and Lane, 2018). None of them explicitly models the sequential order or studies its criticality to the dialogue learning problem.…”
Section: Introduction (mentioning)
confidence: 99%
“…We follow previous question generation work (Xu et al., 2017; Du et al., 2017) in using BLEU (Papineni et al., 2002) and ROUGE-L (Lin, 2004) to measure the relevance between the generated question and the ground-truth one. To evaluate the diversity of the generated questions, we follow Li et al. (2016a) to calculate Dist-n (n=1,2), which is the proportion of unique n-grams over the total number of n-grams in the generated questions for all passages, and to use the Ent-n (n=4) metric, which reflects how evenly the n-gram distribution is spread over all generated questions.…”
Section: Metrics (mentioning)
confidence: 99%
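Dist-n and Ent-n as defined in this excerpt are straightforward corpus-level statistics; a minimal sketch follows (function names and sample data are illustrative). Dist-n is the ratio of unique to total n-grams across all generated outputs, and Ent-n is the Shannon entropy of the n-gram frequency distribution, which is higher when that distribution is more even.

```python
import math
from collections import Counter

def ngrams(tokens, n):
    """All contiguous n-grams of a token list."""
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def dist_n(corpus, n):
    """Unique n-grams / total n-grams over all generated sentences."""
    all_ngrams = [g for sent in corpus for g in ngrams(sent, n)]
    return len(set(all_ngrams)) / max(len(all_ngrams), 1)

def ent_n(corpus, n):
    """Shannon entropy of the n-gram frequency distribution."""
    counts = Counter(g for sent in corpus for g in ngrams(sent, n))
    total = sum(counts.values())
    return -sum(c / total * math.log(c / total) for c in counts.values())

# Toy "generated questions" for illustration.
generated = [["what", "is", "the", "capital", "of", "france"],
             ["what", "is", "the", "largest", "city"]]
print(dist_n(generated, 1), dist_n(generated, 2), ent_n(generated, 4))
```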