2006
DOI: 10.1007/11776420_6

Stable Transductive Learning

Abstract: We develop a new error bound for transductive learning algorithms. The slack term in the new bound is a function of a relaxed notion of transductive stability, which measures the sensitivity of the algorithm to most pairwise exchanges of training and test set points. Our bound is based on a novel concentration inequality for symmetric functions of permutations. We also present a simple sampling technique that can estimate, with high probability, the weak stability of transductive learning algorithms …
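The abstract mentions a sampling technique for estimating the weak stability of a transductive algorithm. As a rough illustration only, not the paper's actual estimator, the following Python sketch measures sensitivity to randomly sampled pairwise exchanges of one training point with one test point; the `algorithm(train_idx, test_idx)` callable and its dict-of-predictions return type are hypothetical assumptions.

```python
import random

def estimate_exchange_sensitivity(algorithm, train_idx, test_idx,
                                  num_samples=1000, seed=0):
    """Monte Carlo estimate of an algorithm's sensitivity to random
    pairwise exchanges of training and test points.

    `algorithm(train_idx, test_idx)` is a hypothetical callable assumed
    to return a dict mapping each test index to a real-valued prediction.
    """
    rng = random.Random(seed)
    base = algorithm(train_idx, test_idx)
    diffs = []
    for _ in range(num_samples):
        i = rng.choice(train_idx)  # training point swapped out
        j = rng.choice(test_idx)   # test point swapped in
        swapped_train = [j if k == i else k for k in train_idx]
        swapped_test = [i if k == j else k for k in test_idx]
        perturbed = algorithm(swapped_train, swapped_test)
        common = [k for k in test_idx if k != j]  # predicted in both runs
        diffs.append(max(abs(base[k] - perturbed[k]) for k in common))
    return sum(diffs) / len(diffs)
```

Since the paper's weak stability is defined with respect to most pairwise exchanges rather than all of them, an average over sampled exchanges, rather than a worst case over every pair, is the natural quantity for such a sampler to approximate.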

Cited by 27 publications (28 citation statements)
References 15 publications (25 reference statements)
“…2 we give a formal definition of the transductive regression setting and the notion of stability for transduction. Our bounds generalize the stability bounds given by Bousquet and Elisseeff (2002) for the inductive setting and extend to regression the stability-based transductive classification bounds of El-Yaniv and Pechyony (2006). Standard concentration bounds such as McDiarmid's bound (McDiarmid, 1989) cannot be readily applied to the transductive regression setting since the points are not drawn independently but uniformly without replacement from a finite set.…”
Section: Introduction (mentioning)
confidence: 77%
“…Standard concentration bounds such as McDiarmid's bound (McDiarmid, 1989) cannot be readily applied to the transductive regression setting since the points are not drawn independently but uniformly without replacement from a finite set. Instead, a generalization of McDiarmid's bound that holds for random variables sampled without replacement is used, as in (El-Yaniv & Pechyony, 2006). Sec.…”
Section: Introduction (mentioning)
confidence: 99%
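For context, the quoted passages refer to McDiarmid's bounded-differences inequality, whose standard form is: if $X_1,\dots,X_n$ are independent and $f$ changes by at most $c_i$ when only its $i$-th argument changes, then

$$\Pr\left[f(X_1,\dots,X_n) - \mathbb{E}\,f \ge \varepsilon\right] \le \exp\!\left(\frac{-2\varepsilon^2}{\sum_{i=1}^{n} c_i^2}\right).$$

The independence assumption is exactly what fails in transduction, where the training set is drawn uniformly without replacement from a fixed full sample; this is why the cited works rely on a permutation-based analogue instead.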
“…As illustrated in [5,1], graph Laplacian regularization, viewed through the graph smoothing functional, is equivalent to kernel regularization in a reproducing kernel Hilbert space (RKHS). However, as shown in [13], the algorithms considered in [1,5] have the deficiency that they always impose unnecessary constraints on the labels of the vertices V. To remedy this deficiency, we introduce the following graph-based quantity…”
Section: Multi-graph Semi-supervised Classification (mentioning)
confidence: 99%
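The quoted passages cite the standard equivalence between graph Laplacian smoothing and RKHS regularization but do not include the graph-based quantity itself. As a generic sketch only, under the usual definitions (weight matrix W, degree matrix D, unnormalized Laplacian L = D − W), Laplacian-regularized least squares over a graph can be written as follows; the function name and interface are illustrative and not drawn from the cited papers.

```python
import numpy as np

def laplacian_regularized_fit(W, y_labeled, labeled_idx, gamma=0.1):
    """Graph-Laplacian-regularized least squares on a similarity graph.

    Minimizes  sum_{i in labeled} (f_i - y_i)^2 + gamma * f^T L f,
    where L = D - W is the unnormalized graph Laplacian.
    """
    n = W.shape[0]
    D = np.diag(W.sum(axis=1))  # degree matrix
    L = D - W                   # unnormalized graph Laplacian
    # Indicator matrix J selects the labeled vertices.
    J = np.zeros((n, n))
    J[labeled_idx, labeled_idx] = 1.0
    y = np.zeros(n)
    y[labeled_idx] = y_labeled
    # First-order optimality condition: (J + gamma * L) f = J y.
    return np.linalg.solve(J + gamma * L, y)

# Toy usage: a 4-vertex chain graph with the two endpoints labeled.
W = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
scores = laplacian_regularized_fit(W, y_labeled=[1.0, -1.0], labeled_idx=[0, 3])
print(scores)  # smooth interpolation between +1 and -1 along the chain
```

The cited papers' remedy for the label-constraint deficiency would modify this objective with their own graph-based quantity, which is not reproduced in the quotes.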
“…However, as pointed out in [19], the algorithms considered in [4,14] impose unnecessary constraints on the labels of the vertices. To remedy this deficiency, we introduce the following graph-based quantity…”
Section: Graph-based Semi-supervised Classification (mentioning)
confidence: 99%
“…Firstly, we design the graph-based semi-supervised algorithm without any constraints on class proportions (see [19]). It has long been noticed that constraining the class proportions on unlabeled data can be important for semi-supervised learning.…”
Section: Introduction (mentioning)
confidence: 99%