2008
DOI: 10.1007/978-3-540-87479-9_8

Large Margin vs. Large Volume in Transductive Learning

Abstract: We focus on distribution-free transductive learning. In this setting the learning algorithm is given a 'full sample' of unlabeled points. Then, a training sample is selected uniformly at random from the full sample and the labels of the training points are revealed. The goal is to predict the labels of the remaining unlabeled points as accurately as possible. The full sample partitions the transductive hypothesis space into a finite number of equivalence classes. All hypotheses in the same equivalence class, g…
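The sampling protocol described in the abstract can be sketched in a few lines. All names and sizes below are illustrative stand-ins, not quantities from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 'full sample' of unlabeled points, identified by index.
full_sample = np.arange(20)
m = 8  # number of training points whose labels are revealed

# A training sample is drawn uniformly at random from the full sample ...
train_idx = rng.choice(full_sample, size=m, replace=False)
# ... and the remaining points form the test set whose labels must be predicted.
test_idx = np.setdiff1d(full_sample, train_idx)

assert len(train_idx) + len(test_idx) == len(full_sample)
```

Note that, unlike inductive learning, the test points themselves are known to the learner in advance; only their labels are hidden.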

Cited by 6 publications (16 citation statements)
References 1 publication
“…Nonetheless, the margin assumption has a strong connection to statistical learning theory [30]; as mentioned in [28], [29], many different labelings y could share the same margin separation. Hence, finding the optimal labeling y for outlier detection based on the maximum margin criterion alone is still very challenging.…”
Section: Manifold Regularization (mentioning)
confidence: 99%
“…Therefore, using merely the maximum margin criterion for outlier detection may be insufficient. Similar to [28], [29], we can also introduce a regularization term Ω(y) on y in (3), which encodes prior knowledge or confidence information about choosing the best labeling y. Then, we arrive at min_{y∈…} min_{w,…} …”
Section: B. Outlier Detection With Maximum Volume Criterion (mentioning)
confidence: 99%
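The idea in this excerpt, scoring candidate labelings y by a margin criterion plus a regularizer Ω(y), can be illustrated with a toy sketch. The margin surrogate (distance between class means), the imbalance regularizer, and every name below are hypothetical stand-ins chosen for brevity, not the paper's actual formulation (3):

```python
import itertools
import numpy as np

def margin_of_labeling(X, y):
    """Toy surrogate for the margin score of labeling y.

    Here: distance between class means. A real max-margin criterion
    would train an SVM per labeling; this is purely illustrative.
    """
    if len(set(y.tolist())) < 2:
        return 0.0
    mu_pos = X[y == 1].mean(axis=0)
    mu_neg = X[y == -1].mean(axis=0)
    return float(np.linalg.norm(mu_pos - mu_neg))

def omega(y):
    """Hypothetical regularizer Omega(y): penalize class imbalance."""
    return abs(int(np.sum(y)))

# Two well-separated clusters of two points each.
X = np.array([[0.0, 0.0], [0.1, 0.2], [2.0, 2.0], [2.1, 1.9]])
lam = 0.5  # trade-off between margin score and the prior Omega(y)

# Enumerate all +/-1 labelings and pick the best regularized score.
best = max(
    (np.array(y) for y in itertools.product([-1, 1], repeat=len(X))),
    key=lambda y: margin_of_labeling(X, y) - lam * omega(y),
)
```

With these toy data the winning labeling separates the two clusters; brute-force enumeration is only feasible here because the full sample is tiny, which is exactly why a tractable criterion over labelings matters.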
“…They also added the third principle of large volume. The large-volume transductive principle was briefly treated in [8] for the case of hyperplanes and extended in [9]. In our approach, we deal with uncertainty in the input space instead of a hypothesis space.…”
Section: Introduction (mentioning)
confidence: 99%
“…*[295]. The most common approach coincides with that of Semi-supervised Support Vector Machines (S3VMs) [296], where learning is usually carried out based on the margin (a margin-based approach),…”
unclassified