2016 International Joint Conference on Neural Networks (IJCNN)
DOI: 10.1109/ijcnn.2016.7727473
A Robust UCB scheme for active learning in regression from strategic crowds

Abstract: We study the problem of training an accurate linear regression model by procuring labels from multiple noisy crowd annotators, under a budget constraint. We propose a Bayesian model for linear regression in crowdsourcing and use variational inference for parameter estimation. To minimize the number of labels crowdsourced from the annotators, we adopt an active learning approach. In this specific context, we prove the equivalence of well-studied criteria of active learning like entropy minimization and expected…

Cited by 2 publications (3 citation statements). References 18 publications.
“…Literature on bandits is too vast to be surveyed here. Starting with the early work of Auer et al (2002a) on multi-armed bandits (MAB), the field has seen progress in linear bandits (Abbasi-Yadkori et al 2011), contextual bandits (Chu et al 2011), as well as applications to recommendation (Li et al 2010), advertising (Chakrabarti et al 2008), web analytics (Tang et al 2013), crowdsourcing (Padmanabhan et al 2016), and mobile health (Tewari and Murphy 2017).…”
Section: Related Work and Our Contributions
confidence: 99%
“…There has been recent interest in developing bandit algorithms where the arm responses are samples from heavy-tailed distributions such as the works of Bubeck et al (2013), Medina and Yang (2016), Padmanabhan et al (2016). A point of confusion may arise here since these algorithms are also sometimes referred to as "robust" algorithms.…”
Section: Heavy-tailed Bandits
confidence: 99%