Proceedings of the 38th International ACM SIGIR Conference on Research and Development in Information Retrieval 2015
DOI: 10.1145/2766462.2767710
Learning Maximal Marginal Relevance Model via Directly Optimizing Diversity Evaluation Measures

Cited by 56 publications (39 citation statements). References 29 publications.
“…Prior state-of-the-art methods for diversifying search results include the Relational Learning-to-Rank framework (R-LTR) and the Perceptron Algorithm using Measures as Margins (PAMM) (Xia et al. 2015). These prior methods either use a heuristic ranking model based on a predefined document similarity function, or automatically learn a ranking model from predefined novelty features, often based on cosine similarity.…”
Section: Learn to match
confidence: 99%
“…With different definitions of the objective functions and optimization techniques, different diverse ranking algorithms have been proposed [21,22,24]. Xia et al. [21] learn a maximal marginal relevance model by directly optimizing diversity evaluation measures. The authors in [22] utilize a neural tensor network to model the novelty relations.…”
Section: Search Results Diversification
confidence: 99%
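The maximal marginal relevance criterion that these learning-based methods generalize can be sketched as a greedy selection rule. The sketch below shows the classic MMR formulation; the `relevance` and `similarity` functions are illustrative stand-ins for the scoring components that methods like R-LTR and PAMM learn from data rather than fix by hand.

```python
def mmr_rank(query, docs, relevance, similarity, lam=0.5, k=10):
    """Greedy MMR: at each step, pick the candidate document that best
    balances relevance to the query against maximum similarity to the
    documents already selected (novelty)."""
    selected = []
    candidates = list(docs)
    while candidates and len(selected) < k:
        def score(d):
            # Redundancy = similarity to the closest already-selected doc.
            redundancy = max((similarity(d, s) for s in selected), default=0.0)
            return lam * relevance(query, d) - (1 - lam) * redundancy
        best = max(candidates, key=score)
        selected.append(best)
        candidates.remove(best)
    return selected
```

With `lam` close to 1 the ranking reduces to plain relevance ordering; lowering `lam` increasingly penalizes documents similar to those already chosen.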
“…Well-known examples include xQuAD [37], RxQuAD [41], IA-select [2], PM-2 [13], and learning models for diversification [25,27,45]. Instead of modeling a set of aspects implicitly, these algorithms obtain aspects manually (e.g., from aspect descriptions [9,11]), create them automatically (e.g., from suggested queries generated by commercial search engines [13,37] or from predefined aspect categories [40]), or directly utilize human-judged aspect labels for learning [25,27,45].…”
Section: Search Results Diversification
confidence: 99%
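For the explicit aspect-based family mentioned above, the xQuAD scoring rule is a representative example: a document's score mixes query relevance with how well it covers query aspects not yet covered by the selected set. The sketch below follows the published xQuAD formula; the probability functions and parameter names are illustrative, not the cited implementation.

```python
def xquad_score(d, selected, p_d_q, p_a_q, p_d_a, lam=0.5):
    """xQuAD: (1 - lam) * P(d|q)
              + lam * sum_a P(a|q) * P(d|a) * prod_{s in selected} (1 - P(s|a))

    p_d_q(d)    -> relevance of d to the query
    p_a_q       -> dict mapping each aspect to its importance P(a|q)
    p_d_a(d, a) -> coverage of aspect a by document d
    """
    diversity = 0.0
    for a, importance in p_a_q.items():
        # Probability that no already-selected document covers aspect a.
        not_covered = 1.0
        for s in selected:
            not_covered *= 1.0 - p_d_a(s, a)
        diversity += importance * p_d_a(d, a) * not_covered
    return (1 - lam) * p_d_q(d) + lam * diversity
```

The product term discounts aspects already covered by earlier picks, so a document covering a neglected aspect gains over one repeating aspects the selected set already satisfies.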