Proceedings of the Twenty-Ninth Annual ACM-SIAM Symposium on Discrete Algorithms 2018
DOI: 10.1137/1.9781611975031.160

A Nearly Instance Optimal Algorithm for Top-k Ranking under the Multinomial Logit Model

Abstract: We study the active learning problem of top-k ranking from multi-wise comparisons under the popular multinomial logit model. Our goal is to identify the top-k items with high probability by adaptively querying sets for comparisons and observing the noisy output of the most preferred item from each comparison. To achieve this goal, we design a new active ranking algorithm without using any information about the underlying items' preference scores. We also establish a matching lower bound on the sample complexity…
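The feedback model described in the abstract can be made concrete with a small simulation: under the multinomial logit (MNL) model, an item in a queried set is reported as the winner with probability proportional to its preference score. The sketch below is illustrative only; the function name mnl_winner and the numeric scores are hypothetical and not taken from the paper.

import random

def mnl_winner(scores, query_set, rng=random):
    # One k-wise comparison under the MNL model: item i in query_set
    # wins with probability scores[i] / sum of scores over query_set.
    weights = [scores[i] for i in query_set]
    total = sum(weights)
    r = rng.random() * total
    acc = 0.0
    for item, w in zip(query_set, weights):
        acc += w
        if r <= acc:
            return item
    return query_set[-1]  # guard against floating-point rounding

# The learner never sees the latent scores; it only observes winners
# of the sets it adaptively chooses to query.
scores = [2.0, 1.5, 1.2, 0.7, 0.3]   # hypothetical preference scores
print(mnl_winner(scores, query_set=[0, 2, 4]))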

Cited by 24 publications (33 citation statements). References 18 publications.
“…Rank aggregation has been an active research problem in recent years (see, e.g., [16,18,19,24,25,31,46,51] and references therein) that finds many applications to social choice, tournament play, search rankings, advertisement placement, etc. With the advent of crowdsourcing services, one can easily ask crowd workers to conduct comparisons among a few objects in an online fashion at a low cost [15,17].…”
Section: Main Contribution (mentioning)
confidence: 99%
“…Finally, (18) in step 3 of the algorithm has a closed-form solution, obtained by writing down the KKT condition. That is,…”
Section: Optimization In Algorithm (mentioning)
confidence: 99%
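Equation (18) itself is not reproduced in this excerpt, so the following is only a generic illustration (with a hypothetical objective, not the cited paper's) of how writing down the KKT conditions for a simplex-constrained problem yields a closed form:

\[
\min_{x \ge 0,\ \sum_i x_i = 1} \;\sum_{i=1}^{n} \frac{c_i}{x_i}
\quad\Longrightarrow\quad
-\frac{c_i}{x_i^2} + \lambda = 0
\quad\Longrightarrow\quad
x_i = \frac{\sqrt{c_i}}{\sum_{j=1}^{n} \sqrt{c_j}},
\]

where $\lambda$ is the multiplier of the equality constraint and the nonnegativity constraints are inactive because each $c_i > 0$.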
“…However, the above has $O\!\left(\binom{n}{k}\right)$ many optimization variables (precisely, the $\mathbb{E}_{\theta_1}[N_S]$'s), so we instead solve the dual LP to reach the desired bound. Lastly, the $\Omega\!\left(\frac{n}{k}\ln\frac{1}{\delta}\right)$ term in the lower bound arises because any learning algorithm must test each item at least a constant number of times via $k$-wise subset plays before judging its optimality, which is the bare minimum sample complexity the learner has to incur (Chen et al. [2018]). The complete proof is given in Appendix C.1.…”
Section: Lower Bound For Winner Feedback (mentioning)
confidence: 99%
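The $\Omega\!\left(\frac{n}{k}\ln\frac{1}{\delta}\right)$ term quoted above follows from a simple counting argument; the sketch below is a hedged illustration of that reasoning (the constant $c$ and the per-item test framing are assumptions, not taken from the cited proof). Each $k$-wise subset play exposes at most $k$ items, and every one of the $n$ items must appear in at least a constant number $c$ of plays before the learner can decide whether it belongs to the top-k:

\[
T \cdot k \;\ge\; c \cdot n \quad\Longrightarrow\quad T = \Omega\!\left(\frac{n}{k}\right),
\]

and requiring each such decision to be correct with probability at least $1-\delta$ contributes the usual $\ln\frac{1}{\delta}$ factor, giving $T = \Omega\!\left(\frac{n}{k}\ln\frac{1}{\delta}\right)$.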
“…This is a basic search problem motivated by applications in recommender systems and information retrieval (Hofmann et al. [2013], Radlinski et al. [2008]), crowdsourced ranking (Chen et al. [2013]), tournament design (Graepel and Herbrich [2006]), etc. It has received recent attention in the online learning community, primarily under the rubric of dueling bandits (e.g., Yue et al. [2012]) and online ranking in the Plackett-Luce (PL) discrete choice model (Chen et al. [2018], Saha and Gopalan [2019], Ren et al. [2018]).…”
Section: Introduction (mentioning)
confidence: 99%