Proceedings of the 24th ACM International Conference on Information and Knowledge Management (CIKM 2015)
DOI: 10.1145/2806416.2806493
A Hierarchical Recurrent Encoder-Decoder for Generative Context-Aware Query Suggestion

Abstract: Users may strive to formulate an adequate textual query for their information need. Search engines assist the users by presenting query suggestions. To preserve the original search intent, suggestions should be context-aware and account for the previous queries issued by the user. Achieving context awareness is challenging due to data sparsity. We present a probabilistic suggestion model that is able to account for sequences of previous queries of arbitrary lengths. Our novel hierarchical recurrent encoder-decoder…

Cited by 411 publications (383 citation statements); References 40 publications.

Citation statements, ordered by relevance:
“…The hierarchical RNN encoder in our model consists of two layers of RNNs (El Hihi and Bengio, 1995; Sordoni et al., 2015a). The lower-level RNN, the utterance-level encoder, takes as input words from the dialogue, and produces a vector output at the end of each utterance.…”
Section: An Automatic Dialogue Evaluation Model (ADEM) (mentioning)
confidence: 99%
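The statement above describes the two-level encoder concretely enough that a small sketch may help. Below is a minimal, hypothetical PyTorch rendering of such a hierarchical RNN encoder: a lower-level GRU reads the words of each utterance and emits one vector per utterance, and an upper-level GRU reads that sequence of utterance vectors to produce a session-level summary. All class, parameter, and dimension names here are illustrative assumptions, not taken from either cited paper.

# Minimal sketch of a two-level hierarchical RNN encoder (assumed PyTorch).
# Names and sizes are illustrative, not from the paper.
import torch
import torch.nn as nn

class HierarchicalEncoder(nn.Module):
    def __init__(self, vocab_size, emb_dim=128, utt_dim=256, ctx_dim=256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        # Lower level: encodes the words of a single utterance/query.
        self.utterance_rnn = nn.GRU(emb_dim, utt_dim, batch_first=True)
        # Upper level: encodes the sequence of utterance vectors.
        self.context_rnn = nn.GRU(utt_dim, ctx_dim, batch_first=True)

    def forward(self, utterances):
        # utterances: list of LongTensors, each of shape (utt_len,),
        # one per utterance in the session/dialogue.
        utt_vectors = []
        for utt in utterances:
            emb = self.embed(utt).unsqueeze(0)   # (1, utt_len, emb_dim)
            _, h = self.utterance_rnn(emb)       # h: (1, 1, utt_dim)
            utt_vectors.append(h.squeeze(0))     # (1, utt_dim)
        seq = torch.stack(utt_vectors, dim=1)    # (1, n_utts, utt_dim)
        _, ctx = self.context_rnn(seq)           # ctx: (1, 1, ctx_dim)
        return ctx.squeeze(0).squeeze(0)         # session-level summary vector

# Example: a session of two utterances, token ids for illustration only.
# enc = HierarchicalEncoder(vocab_size=1000)
# ctx = enc([torch.tensor([4, 8, 15]), torch.tensor([16, 23, 42, 7])])

The key design point the statement highlights is that the lower RNN produces exactly one vector per utterance, so the upper RNN operates on a much shorter sequence and can track context across utterance boundaries.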
“…Sordoni et al. (2015) use HRED to summarize a single representation from both the current and previous sentences, which is limited in that (1) it is only applicable to the encoder-decoder framework without an attention model, and (2) the representation can only be used to initialize the decoder. In contrast, we use HRED to summarize the previous sentences alone, which provides additional cross-sentence context for NMT.…”
Section: Related Work (mentioning)
confidence: 99%
“…In this paper, we propose a cross-sentence context-aware NMT model, which considers the influence of previous source sentences in the same document. Specifically, we employ a hierarchy of Recurrent Neural Networks (RNNs) to summarize the cross-sentence context from source-side previous sentences, which deploys an additional document-level RNN on top of the sentence-level RNN encoder (Sordoni et al., 2015). After obtaining the global context, we design several strategies to integrate it into NMT to translate the current sentence:…”
Section: Introduction (mentioning)
confidence: 99%
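The statement above outlines the cross-sentence mechanism precisely enough for a short sketch. The following is a minimal, hypothetical PyTorch illustration of one plausible integration strategy, assuming a document-level GRU runs over summary vectors of the K previous source sentences and the resulting global context initializes the decoder; it sketches the general idea only, not the authors' implementation, and all names and dimensions are assumptions.

# Hypothetical sketch: document-level RNN over per-sentence summaries,
# with the global context projected into the decoder's initial state.
import torch
import torch.nn as nn

class CrossSentenceContext(nn.Module):
    def __init__(self, sent_dim=256, doc_dim=256, dec_dim=512):
        super().__init__()
        # Document-level RNN over the summary vectors of previous sentences.
        self.doc_rnn = nn.GRU(sent_dim, doc_dim, batch_first=True)
        # Projects the global context into the decoder's initial hidden state.
        self.init_proj = nn.Linear(doc_dim, dec_dim)

    def forward(self, prev_sentence_vectors):
        # prev_sentence_vectors: (1, K, sent_dim), summaries of the K
        # previous source sentences from the sentence-level encoder.
        _, h = self.doc_rnn(prev_sentence_vectors)    # h: (1, 1, doc_dim)
        global_ctx = h.squeeze(0)                     # (1, doc_dim)
        return torch.tanh(self.init_proj(global_ctx))  # decoder init state

Initializing the decoder is only one of the "several strategies" the statement alludes to; the same global context vector could instead be fed as an extra input at each decoding step or combined with the attention mechanism.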
“…Traditional approaches to text generation (Genest and Lapalme, 2012; Yan et al., 2011) mainly focus on grammars, templates, and so on. But it is usually complicated to make every part of the system work and cooperate perfectly with these traditional techniques, while end-to-end generation systems nowadays, like the ones within the encoder-decoder framework (Sordoni et al., 2015), have distinct architectures and achieve promising performance.…”
Section: Related Work (mentioning)
confidence: 99%