Proceedings of the 2017 ACM on Conference on Information and Knowledge Management
DOI: 10.1145/3132847.3133010

Learning to Attend, Copy, and Generate for Session-Based Query Suggestion

Abstract: Users try to articulate their complex information needs during search sessions by reformulating their queries. To make this process more effective, search engines provide related queries to help users in specifying the information need in their search process. In this paper we propose a customized sequence-to-sequence model for session-based query suggestion. In our model, we employ a query-aware attention mechanism to capture the structure of the session context. This enables us to control the scope of the session…
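The query-aware attention the abstract mentions can be illustrated with a minimal sketch: score each encoded session position against the current query vector, normalize with a softmax, and take the weighted sum as a context vector. All names, shapes, and the dot-product scoring function here are illustrative assumptions, not the paper's exact model.

```python
import numpy as np

def query_aware_attention(session_states, query_vec):
    """Toy query-aware attention step (illustrative, not the paper's model).

    session_states: (num_positions, hidden_dim) encoded session context
    query_vec:      (hidden_dim,) representation of the current query
    Returns attention weights and the attention-weighted context vector.
    """
    scores = session_states @ query_vec       # dot-product relevance scores
    weights = np.exp(scores - scores.max())   # numerically stable softmax
    weights /= weights.sum()
    context = weights @ session_states        # weighted sum over positions
    return weights, context

states = np.random.default_rng(0).normal(size=(4, 8))
query = states[-1]                            # attend w.r.t. the most recent query
w, ctx = query_aware_attention(states, query)
```

Because the weights are a softmax, they sum to one, so the context vector stays in the convex hull of the session states; the paper's "scope control" would modulate which positions receive weight.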


Cited by 99 publications (83 citation statements)
References 44 publications
“…As we filtered out queries that do not have any associated clicks when constructing the experiment dataset, we lost some longer tasks; otherwise our test data distribution is similar to [8].…”
Section: Ablation Analysis and Discussion
confidence: 99%
“…In this section, we evaluate the performance of the aforementioned models using multiple metrics for each of the two tasks: query reformulation and ranking. The metrics used here are largely inspired from [11], and we discuss these below briefly. Towards the end of the section we also provide some qualitative results.…”
Section: Evaluation and Results
confidence: 99%
“…, w_{l_c}}. We use LSTMs [15] to model the sequences, owing to their demonstrated capabilities in modeling various natural language tasks, ranging from machine translation [27] to query suggestion [11].…”
Section: Notation
confidence: 99%
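The LSTM the excerpt above refers to can be sketched as a single recurrent cell applied step by step over a token sequence. This is the standard LSTM recurrence [Hochreiter & Schmidhuber, 1997]; the gate ordering (input, forget, candidate, output) and all parameter names are illustrative conventions, not the citing paper's implementation.

```python
import numpy as np

def lstm_step(x, h, c, W, U, b):
    """One step of a standard LSTM cell (illustrative parameterization).

    x: (input_dim,) current token vector; h, c: (hid,) previous states
    W: (4*hid, input_dim), U: (4*hid, hid), b: (4*hid,) joint gate parameters
    """
    z = W @ x + U @ h + b                    # joint pre-activations for all gates
    hid = h.shape[0]
    i = 1 / (1 + np.exp(-z[:hid]))           # input gate
    f = 1 / (1 + np.exp(-z[hid:2*hid]))      # forget gate
    g = np.tanh(z[2*hid:3*hid])              # candidate cell state
    o = 1 / (1 + np.exp(-z[3*hid:]))         # output gate
    c_new = f * c + i * g                    # gated cell-state update
    h_new = o * np.tanh(c_new)               # exposed hidden state
    return h_new, c_new

rng = np.random.default_rng(1)
dim, hid = 5, 3
W = rng.normal(size=(4 * hid, dim))
U = rng.normal(size=(4 * hid, hid))
b = np.zeros(4 * hid)
h = c = np.zeros(hid)
for x in rng.normal(size=(6, dim)):          # encode a 6-token sequence
    h, c = lstm_step(x, h, c, W, U, b)
```

The final `h` serves as the sequence representation; in practice a framework implementation (e.g. a library LSTM layer) replaces this hand-rolled cell.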
“…One such class of approaches is referred to as sequence-to-sequence (seq2seq) modeling. These techniques have been successfully applied to tasks ranging from machine translation [22] to query suggestion [8]. Motivated by their wide-ranging applicability, we use variants of seq2seq models to recommend future commands.…”
Section: Command Recommendation Models
confidence: 99%
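The seq2seq framing described in the excerpt above, encode a history of items and then decode likely future ones, can be sketched with a toy greedy decoder. The vocabulary, the mean-pooling "encoder", and the state-update rule are all hypothetical stand-ins for a trained encoder-decoder, shown only to make the framing concrete.

```python
import numpy as np

# Toy seq2seq framing for command recommendation: encode past commands
# into a single vector, then greedily decode predicted future commands.
# Everything here (vocabulary, random parameters, pooling) is illustrative.
vocab = ["git add", "git commit", "git push", "<eos>"]
rng = np.random.default_rng(2)
embed = rng.normal(size=(len(vocab), 4))   # token embeddings
W_out = rng.normal(size=(4, len(vocab)))   # decoder output projection

def encode(history):
    """Mean-pool embeddings of past commands (stand-in for a learned encoder)."""
    return embed[[vocab.index(t) for t in history]].mean(axis=0)

def greedy_decode(history, max_len=3):
    """Greedily emit up to max_len predicted next commands, stopping at <eos>."""
    state, out = encode(history), []
    for _ in range(max_len):
        tok = vocab[int(np.argmax(state @ W_out))]   # most likely next token
        if tok == "<eos>":
            break
        out.append(tok)
        state = 0.5 * state + 0.5 * embed[vocab.index(tok)]  # toy state update
    return out

recs = greedy_decode(["git add", "git commit"])
```

A real system would replace the greedy argmax with beam search and train the encoder/decoder end to end; the control flow, however, mirrors the seq2seq recommendation setup the excerpt describes.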