Proceedings of the 55th Annual Meeting of the Association For Computational Linguistics (Volume 2: Short Papers) 2017
DOI: 10.18653/v1/p17-2036

How to Make Context More Useful? An Empirical Study on Context-Aware Neural Conversational Models

Abstract: Generative conversational systems are attracting increasing attention in natural language processing (NLP). Recently, researchers have noticed the importance of context information in dialog processing, and built various models to utilize context. However, there is no systematic comparison to analyze how to use context effectively. In this paper, we conduct an empirical study to compare various models and investigate the effect of context information in dialog systems. We also propose a variant that explicitly…


Cited by 114 publications (91 citation statements). References 12 publications.
“…Therefore, some researchers try to define the relevance of the context with a similarity measure. For example, Tian et al. (2017) proposed a weighted sequence (WSeq) attention model for HRED, using cosine similarity to measure the degree of relevance. Specifically, they first calculate the cosine similarity between the post embedding and each context sentence embedding, and then use this normalized similarity score as the attention weight.…”
Section: Related Work
confidence: 99%
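As a reading aid, here is a minimal NumPy sketch of the mechanism this statement describes: each context sentence vector is scored by its cosine similarity to the post (query) vector, and the normalized scores serve as attention weights over the context. Function and variable names are illustrative, not from the paper, and details such as the exact normalization may differ from Tian et al.'s formulation.

```python
import numpy as np

def wseq_attention(post_vec, context_vecs):
    """Weight context sentence vectors by their cosine similarity to the
    post (query) vector, then combine them into one context representation.
    Illustrative sketch only; names and normalization are assumptions."""
    context_vecs = np.asarray(context_vecs)  # shape: (n_sentences, dim)

    # Cosine similarity between the post embedding and each context
    # sentence embedding (epsilon guards against zero-norm vectors).
    norms = np.linalg.norm(context_vecs, axis=1) * np.linalg.norm(post_vec) + 1e-8
    sims = context_vecs @ post_vec / norms   # shape: (n_sentences,)

    # Normalize the similarity scores and use them as attention weights.
    # (Assumes mostly non-negative similarities; the paper's exact
    # normalization may differ, and a softmax is another common choice.)
    weights = sims / (np.abs(sims).sum() + 1e-8)

    # Context representation: weighted sum of context sentence vectors.
    return weights @ context_vecs, weights

# Toy usage with random embeddings standing in for sentence encodings.
rng = np.random.default_rng(0)
ctx = rng.normal(size=(4, 8))   # four context sentence embeddings
post = rng.normal(size=8)       # current post embedding
context_repr, w = wseq_attention(post, ctx)
```

The next citation statement below points at the same design intuition: sentences more similar to the current post contribute more to the context representation, which down-weights stale topics in long conversations.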
“…In a long conversation, the current context or topic of chatting can change to another context/topic. To equip the HRED model to handle this change, [42] proposed to measure context-query relevance. In particular, they used cosine similarity to measure relevance.…”
Section: Additional Embeddings
confidence: 99%
“…In recent years there has been growing interest in research on conversation response generation and ranking with deep learning and reinforcement learning [2,14,15,26,27,39–41]. Existing work includes retrieval-based methods [9,16,36,39,41,48] and generation-based methods [2,5,15,21,23,26,27,30,32]. Sordoni et al. [27] proposed a neural network architecture for response generation that is both context-sensitive and data-driven, utilizing the Recurrent Neural Network Language Model architecture.…”
Section: Related Work
confidence: 99%