Proceedings of the 2018 World Wide Web Conference on World Wide Web - WWW '18 2018
DOI: 10.1145/3178876.3186077
Hierarchical Variational Memory Network for Dialogue Generation

Cited by 78 publications (52 citation statements)
References 25 publications
“…For the purpose of exposition, we have chosen to avoid open questions that do not constrain user response for now. Even interpreting user responses to such questions is considered a challenging task [5]. learning, optimizing the reward of shorter turns and successful recommendations based on the FM's estimation of user preferred items and attributes, and the dialogue history.…”
Section: Introduction (mentioning, confidence: 99%)
“…Neural dialogue generation aims at generating natural-sounding replies automatically to exchange information, e.g., knowledge [7,42,58]. As a core component of both task-oriented and non-task-oriented dialogue systems, neural dialogue generation has received a lot of attention in recent years [1,3,36,58].…”
Section: Neural Dialogue Generation (mentioning, confidence: 99%)
“…Serban et al [38] utilize a latent variable at the sub-sequence level in a hierarchical setting. Chen et al [7] add a hierarchical structure and a variational memory module into a neural encoder-decoder network. However, all these latent vectors and latent memories are unexplainable, which makes it challenging to verify the effectiveness of dialogue state tracking.…”
Section: Dialogue State Tracking (mentioning, confidence: 99%)
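The excerpt above describes the cited architecture only in prose: a hierarchical encoder (word level, then utterance level) whose context state attends over a memory to parameterize a latent variable. A toy numpy sketch of that flow, under the assumption of a tanh recurrence standing in for the GRU and a unit-variance Gaussian latent (all dimensions and names here are illustrative, not the paper's):

```python
import numpy as np

rng = np.random.default_rng(0)
dim = 8

def recur(xs, dim):
    """Toy recurrent encoder (tanh mixing), standing in for a GRU."""
    h = np.zeros(dim)
    for x in xs:
        h = np.tanh(0.5 * h + 0.5 * x)
    return h

# Word level: each utterance (a sequence of word embeddings) -> one vector.
dialogue = [rng.normal(size=(5, dim)),   # utterance 1: 5 words
            rng.normal(size=(3, dim)),   # utterance 2: 3 words
            rng.normal(size=(4, dim))]   # utterance 3: 4 words
utt_vecs = [recur(u, dim) for u in dialogue]

# Dialogue level: encode the utterance vectors (the hierarchical part).
context = recur(utt_vecs, dim)

# Variational memory read: softmax attention over memory slots gives the
# mean of a Gaussian latent; sampling uses the reparameterization trick.
memory = rng.normal(size=(6, dim))       # 6 memory slots
scores = memory @ context
attn = np.exp(scores - scores.max())
attn /= attn.sum()
mu = attn @ memory                        # attention-weighted read
log_var = np.zeros(dim)                   # unit variance for the sketch
z = mu + np.exp(0.5 * log_var) * rng.normal(size=dim)

# A decoder would condition on [context; z]; here we only check shapes.
decoder_input = np.concatenate([context, z])
print(decoder_input.shape)  # (16,)
```

The latent memory read is exactly what the quoted criticism targets: `z` is an opaque vector, so there is no direct way to inspect which dialogue state it encodes.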
“…The recurrent neural networks (RNNs) based sequence-to-sequence (Seq2Seq) architecture enables us to adopt natural text generation in a scalable and end-to-end manner. Many applications such as dialogue systems [2,11,13,25,26,32,35], document summarization [21] and poem generation [37,38] have been developed in the paradigm of Seq2Seq architecture.…”
Section: Introduction (mentioning, confidence: 99%)
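The Seq2Seq paradigm the excerpt refers to can be sketched minimally: an encoder folds the source tokens into a context state, and a decoder unrolls greedily from that state, feeding each prediction back in. A toy sketch with random weights (the tanh cell, vocabulary size, and BOS/EOS conventions below are illustrative assumptions, not any cited system's setup):

```python
import numpy as np

rng = np.random.default_rng(1)
dim, vocab = 8, 10
embed = rng.normal(size=(vocab, dim))    # token embeddings
W_out = rng.normal(size=(dim, vocab))    # output projection

def step(h, x):
    """One toy RNN step (tanh cell), standing in for the recurrent unit."""
    return np.tanh(0.5 * h + 0.5 * x)

# Encoder: fold the source tokens into a single context state.
src = [2, 5, 7]
h = np.zeros(dim)
for tok in src:
    h = step(h, embed[tok])

# Decoder: unroll greedily until EOS (token 0 here) or a length cap,
# feeding each predicted token back as the next input.
out, tok = [], 1  # 1 = BOS for this sketch
for _ in range(5):
    h = step(h, embed[tok])
    tok = int(np.argmax(h @ W_out))
    if tok == 0:
        break
    out.append(tok)
```

Trained end to end, this same skeleton underlies the dialogue, summarization, and poem-generation systems the excerpt lists; they differ mainly in the cell (GRU/LSTM), attention, and conditioning signals.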