Proceedings of the 33rd International ACM SIGIR Conference on Research and Development in Information Retrieval 2010
DOI: 10.1145/1835449.1835495
Active learning for ranking through expected loss optimization

Cited by 63 publications (64 citation statements)
References 17 publications
“…Most other ranking algorithms, such as Rank SVM [9] and Rank Boost [7], suggest adding to the training set the pairs of documents whose predicted relevance scores are very close under the current ranking model. In terms of binary relevance, a greedy algorithm [1] has been proposed that selects the document which best differentiates two ranking systems in terms of average precision. A comparison of document selection methodologies in learning to rank can be found in [8].…”
Section: Learning To Rank Survey
confidence: 99%
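The margin-style pair selection described in the excerpt above can be illustrated with a short sketch. The snippet below is not code from any of the cited papers; the helper name `select_closest_pairs` and the toy scores are assumptions. It simply ranks candidate document pairs by the gap between their predicted scores under the current model and returns the tightest pairs, which is the intuition behind the RankSVM/RankBoost-style selection mentioned in the quote.

```python
import numpy as np

def select_closest_pairs(scores, k=5):
    """Pick the k document pairs whose predicted scores are closest
    under the current ranking model (a margin-style heuristic).

    scores : 1-D array of model scores for the candidate documents.
    Returns a list of (i, j) index pairs, smallest score gap first.
    """
    n = len(scores)
    gaps = []
    for i in range(n):
        for j in range(i + 1, n):
            gaps.append((abs(scores[i] - scores[j]), i, j))
    gaps.sort(key=lambda t: t[0])
    return [(i, j) for _, i, j in gaps[:k]]

# Example: scores from a hypothetical current ranker
scores = np.array([0.91, 0.88, 0.40, 0.39, 0.10])
print(select_closest_pairs(scores, k=2))   # -> [(2, 3), (0, 1)]
```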
“…Margin-based selection criteria seek pairs of instances whose estimated ranks are nearest under the current model [5,37], while others seek examples expected to most influence the ranking function [9] or minimize expected loss [23]. We explore the suitability of margin-based criteria for attribute training, and we propose a new formulation that accounts for diversity.…”
Section: Diversity In Active Learning
confidence: 99%
“…However, there is relatively little work on active learning for ranking tasks. One notable exception is [15], who use the notion of Expected Loss Optimization (ELO). Another work in this area is [4], whose aim was to identify the most interesting substances for drug screening using a minimum number of tests.…”
Section: Related Work In Other Scientific Areas
confidence: 99%
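To make the expected-loss-optimization idea concrete, here is a heavily simplified sketch. The function names `expected_loss` and `elo_select`, the grade set, and the pointwise loss are illustrative assumptions, standing in for the DCG-based expected loss used in the ELO framework of [15]/[23]: each unlabeled document carries a posterior over relevance grades under the current model, and the documents with the largest expected loss are selected for labeling.

```python
import numpy as np

def expected_loss(grade_probs, grades=np.array([0, 1, 2])):
    """Expected pointwise loss for one document: the expected absolute
    deviation of the true grade from the model's best-guess grade,
    taken under the posterior over relevance grades.  (A simplified
    stand-in for the ranking-loss expectation used in ELO.)"""
    guess = grades[np.argmax(grade_probs)]
    return float(np.sum(grade_probs * np.abs(grades - guess)))

def elo_select(posteriors, k=1):
    """Select the k unlabeled documents with the largest expected loss.

    posteriors : (n_docs, n_grades) array; each row is the current
                 model's posterior over relevance grades for a document.
    """
    losses = np.array([expected_loss(p) for p in posteriors])
    return np.argsort(-losses)[:k]

# Toy posteriors over grades {0, 1, 2} for three hypothetical documents
posteriors = np.array([
    [0.90, 0.08, 0.02],   # confident  -> low expected loss
    [0.40, 0.35, 0.25],   # uncertain  -> high expected loss
    [0.05, 0.15, 0.80],
])
print(elo_select(posteriors, k=1))   # selects document 1
```

In this toy setup the uncertain document (row 1) has the largest expected loss and is picked first, mirroring the intuition that labeling it should reduce the ranker's expected error the most.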