Proceedings of the 26th Annual International Conference on Machine Learning 2009
DOI: 10.1145/1553374.1553390
Robust bounds for classification via selective sampling

Cited by 37 publications (40 citation statements)
References 9 publications
“…, K, a weight vector w_i ∈ R^d and a correlation matrix A_i ∈ R^{d×d}, and operates similarly to second-order (or ridge-regression-like) algorithms (Hoerl & Kennard, 1970; Azoury & Warmuth, 2001; Cesa-Bianchi et al., 2005) (see also, e.g., Strehl & Littman, 2008; Crammer et al., 2009a; Cesa-Bianchi et al., 2009; Dekel et al., 2010). The weight vectors are initialized to zero, and the matrices A_i to (1 + α)^2 times the identity matrix I of size d. For brevity, we denote by A a single matrix of size dK × dK, defined to be the block-diagonal matrix A = diag(A_1, A_2, …).…”
Section: The New Bandit Algorithm (mentioning)
confidence: 99%
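The excerpt above gives a concrete initialization for the per-arm second-order state. The following is a minimal sketch of that initialization (not the cited authors' code), assuming NumPy/SciPy; the values of d, K, and alpha are illustrative placeholders, not values from the excerpt.

```python
import numpy as np
from scipy.linalg import block_diag

# Hypothetical dimensions for illustration: K arms, d-dimensional
# contexts, and a parameter alpha (none of these values come from
# the excerpt).
d, K, alpha = 5, 3, 0.1

# Per-arm state as described: weight vectors start at zero, and each
# correlation matrix A_i starts at (1 + alpha)^2 times the d x d identity.
w = [np.zeros(d) for _ in range(K)]
A_blocks = [(1 + alpha) ** 2 * np.eye(d) for _ in range(K)]

# For the analysis, the K matrices are stacked into a single dK x dK
# block-diagonal matrix A = diag(A_1, ..., A_K).
A = block_diag(*A_blocks)
assert A.shape == (d * K, d * K)
```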
“…Specifically, there are two kinds of settings for online active learning: the selective sampling setting (Cavallanti et al. 2009; Cesa-Bianchi et al. 2009; Dekel et al. 2010; Orabona and Cesa-Bianchi 2011) and the label-efficient learning setting. We summarize their differences in several aspects.…”
Section: Online Active Learning (mentioning)
confidence: 99%
“…In selective sampling 0 ≤ κ ≤ 1 is a parameter of the algorithm, n is the number of steps with a margin less than ε, and the bound holds for any 0 < ε < 1. Cesa-Bianchi et al. (2009) analyze a learning setting which is complementary to the hybrid setting introduced in this paper. They consider the selective sampling problem, in which inputs are arbitrarily generated by an adversary while labels are noisy observations of a linear hypothesis.…”
Section: Related Work (mentioning)
confidence: 99%
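The excerpt above describes the selective sampling protocol: inputs may arrive adversarially, labels are noisy observations of a linear hypothesis, and the learner queries a label only when its prediction margin is small. Below is a minimal sketch of that protocol, not the cited algorithm itself: it scores with ridge regression and queries on a fixed margin threshold eps, whereas the cited papers use randomized or data-dependent query rules; the names selective_sampler, stream, and get_label are illustrative assumptions.

```python
import numpy as np

def selective_sampler(stream, d, eps=0.1, reg=1.0):
    # Second-order (ridge regression) state over queried rounds.
    A = reg * np.eye(d)      # regularized correlation matrix
    b = np.zeros(d)          # sum of y_t * x_t over queried rounds
    n_queries = 0
    for x, get_label in stream:       # get_label() reveals y_t only if called
        w = np.linalg.solve(A, b)     # current ridge-regression estimate
        if abs(w @ x) <= eps:         # small margin: query the label and update
            y = get_label()           # y assumed in {-1, +1}
            A += np.outer(x, x)
            b += y * x
            n_queries += 1
    return np.linalg.solve(A, b), n_queries
```

On rounds with a large margin the label is never requested, which is what makes the sampling "selective": label complexity scales with the number of small-margin steps (the quantity n in the bound quoted above) rather than with the total number of rounds.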