2009
DOI: 10.1587/transinf.e92.d.1338

Recent Advances and Trends in Large-Scale Kernel Methods

Abstract: Kernel methods such as the support vector machine are among the most successful algorithms in modern machine learning. Their advantage is that linear algorithms can be extended to non-linear scenarios in a straightforward way via the kernel trick. However, naive use of kernel methods is computationally expensive, since their computational complexity typically scales cubically with the number of training samples. In this article, we review recent advances in kernel methods, with empha…
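As a concrete illustration of the cubic-scaling problem the survey addresses, below is a minimal sketch of the Nyström method, one representative low-rank technique: instead of forming and factorizing the full n x n kernel matrix at O(n^3) cost, m << n landmark points yield an explicit feature map at roughly O(n m^2) cost. This is a hedged sketch, not code from the paper; the RBF kernel choice, the landmark count, and the names rbf_kernel and nystroem_features are illustrative assumptions.

```python
import numpy as np

def rbf_kernel(X, Y, gamma=1.0):
    # Gaussian (RBF) kernel matrix between row-sample sets X and Y.
    sq = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * sq)

def nystroem_features(X, n_landmarks=100, gamma=1.0, rng=None):
    """Low-rank Nystroem feature map (illustrative sketch).

    Replaces the O(n^3) factorization of the full kernel matrix with
    an m x m eigendecomposition over m << n landmarks, about O(n m^2)
    overall. Kernel, landmark count, and names are assumptions.
    """
    rng = np.random.default_rng(rng)
    idx = rng.choice(len(X), size=n_landmarks, replace=False)
    L = X[idx]                              # landmark points
    K_mm = rbf_kernel(L, L, gamma)          # m x m kernel among landmarks
    K_nm = rbf_kernel(X, L, gamma)          # n x m cross-kernel
    # Symmetric inverse square root of K_mm via eigendecomposition.
    w, V = np.linalg.eigh(K_mm)
    w = np.clip(w, 1e-12, None)             # guard tiny/negative eigenvalues
    K_mm_inv_sqrt = V @ np.diag(w ** -0.5) @ V.T
    # Features Phi satisfy Phi @ Phi.T ~= full n x n kernel matrix.
    return K_nm @ K_mm_inv_sqrt
```

A linear model trained on these features then stands in for the corresponding kernel machine at a fraction of the cost.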

Cited by 19 publications (5 citation statements)
References 64 publications (62 reference statements)

Citation statements (ordered by relevance):

“…[25]. Therefore WNN-GIP is more efficient than KBMF2K, since the total time complexity of each iteration of the variational approximation method used in KBMF2K is [formula not rendered], where [symbol not rendered] is the subspace dimensionality used in the method.…”
Section: Discussion (mentioning)
confidence: 99%

“…So, to make an RLS prediction using the Kronecker product kernel we only need to perform the two eigendecompositions and some matrix multiplications, bringing the runtime down to O(n_d^3 + n_t^3). The efficiency of this computation could be further improved, yielding quadratic computational complexity, by applying recent techniques for large-scale kernel methods to compute the two kernel decompositions (Kashima et al., 2009b; Wu et al., 2006).…”
Section: RLS-Kron Classifier (mentioning)
confidence: 99%

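The two-eigendecomposition shortcut this statement describes can be sketched as follows. This is a hedged illustration of Kronecker-kernel regularized least squares under the usual drug-target setup (an n_d x n_t label matrix Y and precomputed kernel matrices K_d and K_t), not the cited authors' implementation; the function name and the regularizer lam are assumptions.

```python
import numpy as np

def kron_rls_predict(K_d, K_t, Y, lam=1.0):
    """RLS with the Kronecker product kernel K_t (x) K_d, computed via
    the two small eigendecompositions instead of a direct solve on the
    (n_d * n_t)-sized system: O(n_d^3 + n_t^3) plus matrix products.
    Illustrative sketch; names and regularizer are assumptions.
    """
    wd, Ud = np.linalg.eigh(K_d)   # K_d = Ud diag(wd) Ud^T
    wt, Ut = np.linalg.eigh(K_t)   # K_t = Ut diag(wt) Ut^T
    # Rotate the label matrix into the joint eigenbasis.
    Y_tilde = Ud.T @ Y @ Ut
    # Kronecker eigenvalues are all products wd_i * wt_j;
    # apply the RLS spectral filter mu / (mu + lam) elementwise.
    W = np.outer(wd, wt)
    F_tilde = (W / (W + lam)) * Y_tilde
    # Rotate back to obtain the n_d x n_t prediction matrix.
    return Ud @ F_tilde @ Ut.T
```

Because only the two small factorizations touch cubic cost, the Kronecker system itself is never materialized.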
“…[21]). In the future, we will consider speeding up decoding with structured models [28], [32], [45].…”
Section: Discussion (mentioning)
confidence: 99%

“…We will combine our method with other techniques that provide sparse solutions, for example, kernel methods on a budget (Dekel and Singer, 2007; Dekel et al., 2008; Orabona et al., 2008) or kernel approximation (surveyed in Kashima et al. (2009)). It is also easy to combine our method with SVMs with partial kernel expansion (Goldberg and Elhadad, 2008), which will yield slower but more space-efficient classifiers.…”
Section: Discussion (mentioning)
confidence: 99%
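As one member of the kernel-approximation family this statement refers to, here is a hedged sketch of random Fourier features for the Gaussian kernel (the technique of Rahimi and Recht); the feature count, kernel width, and function name are illustrative assumptions, not anything from the cited papers.

```python
import numpy as np

def random_fourier_features(X, n_features=256, gamma=1.0, rng=None):
    """Random Fourier feature map approximating the RBF kernel
    k(x, y) = exp(-gamma * ||x - y||^2), so that z(x) . z(y) ~= k(x, y).
    Sketch only; feature count and kernel width are assumptions.
    """
    rng = np.random.default_rng(rng)
    d = X.shape[1]
    # Spectral sampling for this Gaussian kernel: w ~ N(0, 2*gamma*I).
    W = rng.normal(scale=np.sqrt(2.0 * gamma), size=(d, n_features))
    b = rng.uniform(0.0, 2.0 * np.pi, size=n_features)
    # Cosine features; the scaling makes the inner product unbiased.
    return np.sqrt(2.0 / n_features) * np.cos(X @ W + b)
```

A linear classifier trained on z(X) then replaces the full kernel expansion with a fixed-size, budget-friendly model.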