Proceedings of the 27th ACM International Conference on Information and Knowledge Management 2018
DOI: 10.1145/3269206.3272019

Multi-Task Learning for Email Search Ranking with Auxiliary Query Clustering

Abstract: User information needs vary significantly across different tasks, and their queries therefore differ considerably in expressiveness and semantics. Many approaches have been proposed to model such query diversity by obtaining query types and building query-dependent ranking models. These approaches typically require either a labeled query dataset or clicks from multiple users aggregated over the same document. These techniques, however, are not applicable when manual query labeling is not viable, and …

Cited by 14 publications (14 citation statements)
References: 33 publications
“…Ai et al [2] conducted a thorough survey of search intent by analyzing user logs of email search. Shen et al [39] categorized email search queries into different clusters before adding the query cluster information to improve email ranking.…”
Section: Email Search
confidence: 99%
“…Recently, deep neural networks (DNNs) have shown great success in learning-to-rank tasks. They significantly improve the performance of search engines when large-scale query logs are available, in both web search [19] and email settings [39,45,51]. The advantages of DNNs over traditional models are mainly two-fold: (1) DNNs have a strong capacity to learn embedded representations from sparse features, including words [33] and characters [6].…”
Section: Introduction
confidence: 99%
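The citation above highlights how ranking DNNs learn dense representations from sparse word- and character-level features. A minimal sketch of that idea, not the cited authors' model: sparse textual features are hashed into a vocabulary and mapped through a learnable embedding table, then pooled into one dense vector. All names, sizes, and the hashing scheme are illustrative assumptions.

```python
import random

# Hypothetical sizes -- real systems use much larger tables.
VOCAB_SIZE = 1000   # hashed-vocabulary size (assumption)
EMBED_DIM = 8       # embedding width (assumption)

random.seed(0)
# Embedding table: one learnable vector per hashed feature id.
embedding_table = [[random.gauss(0.0, 0.1) for _ in range(EMBED_DIM)]
                   for _ in range(VOCAB_SIZE)]

def feature_ids(text):
    """Hash sparse word and character-trigram features into the vocabulary."""
    words = text.lower().split()
    char_trigrams = [text[i:i + 3] for i in range(len(text) - 2)]
    return [hash(f) % VOCAB_SIZE for f in words + char_trigrams]

def embed(text):
    """Average the embeddings of all sparse features (bag-of-features pooling)."""
    ids = feature_ids(text)
    if not ids:
        return [0.0] * EMBED_DIM
    summed = [0.0] * EMBED_DIM
    for i in ids:
        for d in range(EMBED_DIM):
            summed[d] += embedding_table[i][d]
    return [s / len(ids) for s in summed]

query_vec = embed("quarterly report attachment")
print(len(query_vec))  # one dense EMBED_DIM-dimensional vector
```

In a trained model the table entries are parameters updated by backpropagation; here they are random, which is enough to show the sparse-to-dense mapping.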
“…Enterprise search is also closely related to personal search (e.g., email search), as both deal with searching in private or access controlled corpora [2,8,12,20,26,39,44]. Even though some success has been found using time-based approaches for personal search [12], relevance-based ranking arising from learning-to-rank deep neural network models has become increasingly popular [39,46] as the sizes of private corpora increase [20]. However, to the best of our knowledge, our work is the first study on applying deep neural networks specifically in the enterprise search setting.…”
Section: Enterprise Search
confidence: 99%
“…To preserve privacy, the inputs are k-anonymized and only query and document n-grams that are frequent in the entire corpus are retained. For more details about the specific features and the anonymization process used in a similar setting see [39,46]. The query features and sparse document features are passed through a single embedding layer, whereas the dense features are left as they are.…”
Section: Input Embeddings
confidence: 99%
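The pipeline described in this citation can be sketched as follows, under stated assumptions: only n-grams frequent across the corpus are retained (a coarse stand-in for the k-anonymization step, with an assumed frequency cutoff), retained sparse features pass through an embedding lookup, and dense features bypass the embedding and are concatenated as-is. Every name, threshold, and dimension here is illustrative, not the paper's configuration.

```python
from collections import Counter

MIN_CORPUS_FREQ = 2   # hypothetical cutoff for retaining frequent n-grams
EMBED_DIM = 4         # hypothetical embedding width

def ngrams(text, n=2):
    toks = text.lower().split()
    return [" ".join(toks[i:i + n]) for i in range(len(toks) - n + 1)]

def build_vocab(corpus):
    """Keep only n-grams frequent in the whole corpus (privacy filter sketch)."""
    counts = Counter(g for doc in corpus for g in ngrams(doc))
    kept = sorted(g for g, c in counts.items() if c >= MIN_CORPUS_FREQ)
    return {g: i for i, g in enumerate(kept)}

def encode(doc_text, dense_features, vocab, table):
    """Embed sparse n-grams, then concatenate untouched dense features."""
    ids = [vocab[g] for g in ngrams(doc_text) if g in vocab]
    if ids:
        sparse_vec = [sum(table[i][d] for i in ids) / len(ids)
                      for d in range(EMBED_DIM)]
    else:
        sparse_vec = [0.0] * EMBED_DIM
    # Dense features (e.g. recency, size) skip the embedding layer.
    return sparse_vec + dense_features

corpus = ["budget review meeting notes",
          "budget review follow up",
          "weekly meeting notes archive"]
vocab = build_vocab(corpus)   # rare n-grams are dropped
table = [[0.01 * (i + d) for d in range(EMBED_DIM)] for i in range(len(vocab))]
x = encode("budget review meeting notes", [0.5, 1.0], vocab, table)
print(len(x))  # EMBED_DIM (4) + 2 dense features = 6
```

Note that rare bigrams such as "review meeting" never enter the vocabulary, so they cannot leak into the model input, which is the intuition behind retaining only corpus-frequent n-grams.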