2017
DOI: 10.1609/aaai.v31i1.10983
A Hierarchical Latent Variable Encoder-Decoder Model for Generating Dialogues

Abstract: Sequential data often possesses hierarchical structures with complex dependencies between sub-sequences, such as found between the utterances in a dialogue. To model these dependencies in a generative framework, we propose a neural network-based generative architecture, with stochastic latent variables that span a variable number of time steps. We apply the proposed model to the task of dialogue response generation and compare it with other recent neural-network architectures. We evaluate the model performance…

Cited by 349 publications (140 citation statements)
References 21 publications
“…Li et al. (2015) measure the reply diversity by calculating the proportion of distinct unigrams and bigrams. Besides, Serban et al. (2017) and Mou et al. (2016) use entropy to measure the information of generated replies; such a metric is independent of the query and ground truth, and can be easily cheated if used alone. Lowe et al. (2017) propose a neural network-based metric learned in a supervised fashion.…”
Section: Evaluation for Dialog Systems (mentioning)
confidence: 99%
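To make the two reference-free metrics in this excerpt concrete, here is a minimal Python sketch of distinct-n (Li et al., 2015) and unigram entropy over a set of generated replies. The whitespace tokenization and log base are assumptions for illustration, not the cited papers' exact implementations.

```python
from collections import Counter
import math

def distinct_n(replies, n):
    """Proportion of distinct n-grams across all generated replies."""
    ngrams = Counter()
    for reply in replies:
        tokens = reply.split()
        ngrams.update(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))
    total = sum(ngrams.values())
    return len(ngrams) / total if total else 0.0

def unigram_entropy(replies):
    """Entropy of the unigram distribution over generated replies.
    Note it ignores the query and ground truth, so a system can game it
    if it is used alone, as the excerpt points out."""
    counts = Counter(tok for reply in replies for tok in reply.split())
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

replies = ["i do not know", "i like jazz music", "i do not know"]
print(distinct_n(replies, 1), distinct_n(replies, 2), unigram_entropy(replies))
```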
“…2, we use a standard recurrent neural network-based decoder with GRU cells. Such a decoder has been used successfully for various natural language generation tasks, including text conversation systems (Serban et al. 2016b). We also implemented a version where we couple the decoder with an attention model which learns to attend upon…”
Section: Decoder for Generating Text Responses (mentioning)
confidence: 99%
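For readers unfamiliar with the architecture this excerpt describes, here is a minimal PyTorch-style sketch of a greedy GRU decoder conditioned on an encoder context vector, without the attention coupling. The class name, dimensions, and greedy decoding loop are hypothetical illustrations, not the authors' code.

```python
import torch
import torch.nn as nn

class GRUDecoder(nn.Module):
    """Greedy GRU decoder conditioned on a context vector (illustrative sketch)."""
    def __init__(self, vocab_size, emb_dim=128, hid_dim=256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.gru = nn.GRU(emb_dim, hid_dim, batch_first=True)
        self.out = nn.Linear(hid_dim, vocab_size)

    def forward(self, context, max_len=20, bos_id=1):
        # context: (batch, hid_dim) encoder summary, used as the initial hidden state
        hidden = context.unsqueeze(0)                    # (1, batch, hid_dim)
        token = torch.full((context.size(0), 1), bos_id, dtype=torch.long)
        outputs = []
        for _ in range(max_len):
            emb = self.embed(token)                      # (batch, 1, emb_dim)
            step, hidden = self.gru(emb, hidden)
            logits = self.out(step.squeeze(1))           # (batch, vocab_size)
            token = logits.argmax(dim=-1, keepdim=True)  # greedy token choice
            outputs.append(token)
        return torch.cat(outputs, dim=1)                 # (batch, max_len)

decoder = GRUDecoder(vocab_size=1000)
print(decoder(torch.randn(2, 256)).shape)  # torch.Size([2, 20])
```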
“…Though there has been recent work (Serban et al. 2016b; Yao et al. 2016; Serban et al. 2016a) with different conversation datasets (Lowe et al. 2015; Vinyals and Le 2015; Ritter, Cherry, and Dolan 2010), the mode of interaction there is limited to text conversations only. While multimodal, human-to-human conversation transcripts (e.g.…”
Section: Introduction (mentioning)
confidence: 99%
“…We decompose a dialogue into two levels: sequences of utterances and sub-sequences of words, as in (Serban et al. 2017). Let w_1, …”
Section: Model (mentioning)
confidence: 99%
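The excerpt is truncated mid-definition. For context, the two-level decomposition it cites is the hierarchical factorization over N utterances w_1, …, w_N, each utterance w_n being a sequence of M_n tokens; in roughly the notation of Serban et al. (2017), and reconstructed here rather than taken from the quoting paper, it reads:

```latex
P_\theta(w_1, \ldots, w_N)
  = \prod_{n=1}^{N} P_\theta(w_n \mid w_{<n})
  = \prod_{n=1}^{N} \prod_{m=1}^{M_n} P_\theta(w_{n,m} \mid w_{n,<m},\, w_{<n})
```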
“…Recently, (Serban et al. 2017) proposed the Variational Hierarchical Recurrent Encoder-Decoder (VHRED) model, which borrows the idea from conditional variational autoencoders (CVAE) (Sohn, Lee, and Yan 2015) to model multi-turn dialogue generation. However, the approximated posterior distribution needs to be specified, and the prior distribution is not guaranteed to be the same as the marginal posterior distribution in the global optimum.…”
Section: Introduction (mentioning)
confidence: 99%
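To make the excerpt's point concrete: VHRED introduces a latent variable z_n per utterance and maximizes a CVAE-style evidence lower bound, which in roughly the original paper's notation is (a reconstruction from the cited paper, so symbols may differ slightly):

```latex
\log P_\theta(w_1, \ldots, w_N) \ge \sum_{n=1}^{N} \Big[
  -\mathrm{KL}\big( Q_\psi(z_n \mid w_1, \ldots, w_n) \,\|\, P_\theta(z_n \mid w_{<n}) \big)
  + \mathbb{E}_{Q_\psi(z_n \mid w_1, \ldots, w_n)}\big[ \log P_\theta(w_n \mid z_n, w_{<n}) \big]
\Big]
```

Here Q_\psi is the hand-specified approximate posterior and P_\theta(z_n | w_{<n}) the learned prior, which is precisely the gap the excerpt highlights: nothing in this objective forces the prior to match the marginal (aggregated) posterior at the global optimum.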