2005
DOI: 10.1007/11564096_23

Margin-Sparsity Trade-Off for the Set Covering Machine

Abstract: We propose a new learning algorithm for the set covering machine and a tight data-compression risk bound that the learner can use to choose the appropriate trade-off between the sparsity of a classifier and the magnitude of its separating margin.
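The abstract refers to a data-compression risk bound without reproducing it. As an illustration of the ingredients such a bound involves, the sketch below computes a generic Floyd–Warmuth-style sample-compression bound via binomial tail inversion. The function names and the simple union-bound allocation over compression sets are assumptions for illustration only, not the paper's actual bound (which is tighter and accounts for the margin).

```python
from math import comb

def bin_tail(m, k, r):
    # P[Binomial(m, r) <= k]: probability of at most k errors
    # among m independent trials with error rate r.
    return sum(comb(m, j) * r**j * (1 - r) ** (m - j) for j in range(k + 1))

def bin_tail_inv(m, k, delta, tol=1e-9):
    # Smallest risk r such that observing <= k errors on m points
    # has probability at most delta; found by bisection.
    lo, hi = 0.0, 1.0
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if bin_tail(m, k, mid) > delta:
            lo = mid
        else:
            hi = mid
    return hi

def compression_risk_bound(m, d, k, delta):
    # Illustrative sample-compression bound: a classifier rebuilt from a
    # compression set of d of the m training examples, making k errors on
    # the m - d remaining examples, gets a binomial test-set bound with the
    # confidence budget split uniformly over the C(m, d) possible sets.
    # (The paper's bound uses a tighter, margin-aware allocation.)
    eff_delta = delta / comb(m, d)
    return bin_tail_inv(m - d, k, eff_delta)
```

Note how sparsity enters directly: a smaller compression set size `d` shrinks the `comb(m, d)` penalty and leaves more holdout points, which is the sparsity side of the margin-sparsity trade-off the paper studies.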


Cited by 6 publications (4 citation statements)
References 8 publications
“…This is an extension of the works in Laviolette et al. (2005, 2006) and Shah (2006). Basically, in order to incorporate the notion of margin, we use two approaches.…”
Section: Is It Worthwhile To Investigate If Classifiers With Improved… (supporting)
confidence: 63%
“…In that setting, given an a priori defined vector i of indices, one can use the examples of the training set that do not correspond to any index of i to bound the risk of the classifier defined by i (and the training set S). Moreover, provided a prior distribution is given on the set of all possible vectors of indices, one can extend such a bound to one that is valid simultaneously for all classifiers that can be reconstructed [8].…”
Section: A Uniform Risk Bound For Active Learning (mentioning)
confidence: 99%
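The construction quoted above — a per-classifier holdout bound made uniform over all index vectors by weighting the confidence budget with a prior — can be sketched as follows. This uses a simple Hoeffding-style deviation term for clarity; the cited works use tighter binomial bounds, and the function name and prior values here are illustrative assumptions.

```python
from math import log, sqrt

def uniform_holdout_bound(errors, n_holdout, prior_i, delta):
    # Bound the true risk of the classifier reconstructed from index
    # vector i, using its `errors` on the n_holdout training examples
    # whose indices do not appear in i. Allotting each i a delta * P(i)
    # slice of the confidence budget makes the bound hold simultaneously
    # for all index vectors, by a union bound over the prior.
    assert 0 < prior_i <= 1 and 0 < delta < 1
    empirical = errors / n_holdout
    deviation = sqrt(log(1 / (delta * prior_i)) / (2 * n_holdout))
    return empirical + deviation
```

Index vectors assigned more prior mass pay a smaller complexity penalty, so the prior plays the same role as the compression-set counting term in a sample-compression bound.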
“…This allows us to bound the generalisation error of the classifier using risk bounds based on data-dependent compression schemes. When using VS in a batch setting, from [14] we have the following:…”
Section: B. Benchmark Data (mentioning)
confidence: 99%