Proceedings of the 29th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval 2006
DOI: 10.1145/1148170.1148253
Large scale semi-supervised linear SVMs

Abstract: Large scale learning is often realistic only in a semi-supervised setting where a small set of labeled examples is available together with a large collection of unlabeled data. In many information retrieval and data mining applications, linear classifiers are strongly preferred because of their ease of implementation, interpretability and empirical performance. In this work, we present a family of semi-supervised linear support vector classifiers that are designed to handle partially-labeled sparse datasets wi…
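The abstract describes learning a linear classifier from a small labeled set plus a large unlabeled collection. The paper's own S³VM optimization is not reproduced here; the following is only a minimal self-training sketch of that general setting, in plain NumPy on synthetic toy data (the helper name, thresholds, and hyperparameters are illustrative assumptions, not the authors' method): fit on the labeled points, pseudo-label unlabeled points the model is confident about, and refit.

```python
import numpy as np

def hinge_fit(X, y, lam=0.01, lr=0.1, steps=300):
    """L2-regularized hinge-loss linear classifier (full-batch gradient).
    Illustrative helper, not the paper's S3VM solver."""
    w = np.zeros(X.shape[1])
    n = len(X)
    for _ in range(steps):
        margins = y * (X @ w)
        act = margins < 1                      # examples with active hinge loss
        grad = lam * w - (y[act, None] * X[act]).sum(axis=0) / n
        w -= lr * grad
    return w

# Synthetic data: two Gaussian clusters separated along the first coordinate.
rng = np.random.default_rng(0)
y_true = np.where(rng.random(300) < 0.5, 1, -1)
X = rng.normal(size=(300, 2))
X[:, 0] += 2.0 * y_true
labeled = np.arange(20)                        # only 20 labeled examples
unlabeled = np.arange(20, 300)

# 1) Train on labeled data only.
w0 = hinge_fit(X[labeled], y_true[labeled])
# 2) Pseudo-label unlabeled points outside the margin (confidence > 1).
scores = X[unlabeled] @ w0
keep = np.abs(scores) > 1.0
# 3) Retrain on labeled + confidently pseudo-labeled data.
X_aug = np.vstack([X[labeled], X[unlabeled][keep]])
y_aug = np.concatenate([y_true[labeled], np.sign(scores[keep])])
w = hinge_fit(X_aug, y_aug)
acc = np.mean(np.sign(X @ w) == y_true)
```

The margin-based confidence filter (step 2) is the crude analogue of how semi-supervised SVMs push the decision boundary through low-density regions of the unlabeled data.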

Cited by 160 publications (138 citation statements)
References 5 publications
“…Researchers have therefore explored ways to achieve faster training times. For linear SVMs very efficient solvers are available which converge in a time which is linear in the number of examples [16,17,15]. Approximate solvers that can be trained in linear time without a significant loss of accuracy were also developed [18].…”
Section: SVM Training Algorithms and Software
confidence: 99%
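The statement above refers to linear-SVM solvers whose training time is linear in the number of examples. As an illustrative sketch (not the specific solvers cited as [16,17,15]), a Pegasos-style stochastic subgradient method has this property: each epoch is one pass over the data, and each update costs time proportional to the feature dimension. The toy data and hyperparameters below are assumptions.

```python
import numpy as np

def pegasos_train(X, y, lam=0.01, epochs=20, seed=0):
    """Pegasos-style stochastic subgradient training of a linear SVM.
    One epoch = one pass over the n examples, so per-epoch cost is O(n*d)."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w = np.zeros(d)
    t = 0
    for _ in range(epochs):
        for i in rng.permutation(n):
            t += 1
            eta = 1.0 / (lam * t)        # standard decreasing step size
            margin = y[i] * (w @ X[i])
            w *= (1.0 - eta * lam)       # shrink from the L2 regularizer
            if margin < 1:               # hinge loss active: push toward example
                w += eta * y[i] * X[i]
    return w

# Toy linearly separable data: label is the sign of the first coordinate.
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 2))
y = np.where(X[:, 0] > 0, 1, -1)
w = pegasos_train(X, y)
acc = np.mean(np.sign(X @ w) == y)
```

Because each update touches a single (possibly sparse) example, the same loop extends directly to the large sparse datasets the abstract targets.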
“…The proofs for the convergence are novel and generalize across a wide variety of Bregman divergences, allowing one to use a suitable divergence measure based on the application domain. The proposed framework has been empirically shown to outperform a variety of algorithms [Gao et al. 2008, 2013; Sindhwani and Keerthi 2006] in both semi-supervised and transfer learning problems. More significantly, it can operate even in settings where there are no labeled data in the target domain, and the labeled data from the source domain are also no longer available.…”
Section: Discussion
confidence: 99%
“…Linear Support Vector Machine (S³VM) [Sindhwani and Keerthi 2006] for comparison. Finally, in Section 6.5, we report empirical results for transfer learning settings.…”
Section: Experimental Evaluation
confidence: 99%