1997
DOI: 10.1214/aos/1069362389

Bandit problems with infinitely many arms


Cited by 112 publications (77 citation statements)
References 7 publications
“…A strong probabilistic assumption that has been made in [17,19] to model such situations is that the mean value of any unobserved arm is a random variable that follows some known distribution. More recently this assumption has been weakened in [108] with an assumption focusing only on the upper tail of this distribution.…”
Section: Unstructured Rewards (mentioning)
confidence: 99%
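The modeling assumption quoted above — the mean of every unobserved arm is drawn from a known distribution — can be illustrated with a minimal sketch. The uniform prior on [0, 1] and the simple 1-failure switching rule below are illustrative assumptions chosen for brevity, not a reproduction of the strategies analyzed in Berry et al. (1997).

```python
import random

def simulate_one_failure_strategy(horizon, rng=random.Random(0)):
    """Simulate an infinite-armed Bernoulli bandit for `horizon` pulls.

    Illustrative assumptions: each new arm's success probability is
    drawn i.i.d. from Uniform(0, 1) (the "known distribution" in the
    excerpt), and the player follows a 1-failure rule: keep pulling
    the current arm until it fails once, then draw a fresh arm.
    """
    total_successes = 0
    current_mean = rng.random()          # draw a fresh arm
    for _ in range(horizon):
        if rng.random() < current_mean:  # success: stay on this arm
            total_successes += 1
        else:                            # failure: switch to a new arm
            current_mean = rng.random()
    return total_successes / horizon

if __name__ == "__main__":
    # Average success rate over the horizon; longer horizons let the
    # rule settle on arms whose means are close to 1.
    print(simulate_one_failure_strategy(horizon=100_000))
```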
“…Examples include Berry et al. (1997), who consider a Bayesian setting with infinite Bernoulli arms, and Kleinberg (2004, 2005), who considers applications to network routing and single-product pricing. However, most of these works are very problem-specific, and they are not applicable to our assortment optimization problem.…”
Section: Connections to Multiarmed Bandit Problems (mentioning)
confidence: 99%
“…The infinite bandit problem [1,2] and the secretary problem [7] are the closest formulations we know of, in that they require some form of stopping, but they differ in aspects i) and ii).…”
Section: Related Work (mentioning)
confidence: 99%