2010
DOI: 10.1186/1471-2105-11-309
L2-norm multiple kernel learning and its application to biomedical data fusion

Abstract: Background: This paper introduces the notion of optimizing different norms in the dual problem of support vector machines with multiple kernels. The selection of norms yields different extensions of multiple kernel learning (MKL), such as L∞, L1, and L2 MKL. In particular, L2 MKL is a novel method that leads to non-sparse optimal kernel coefficients, unlike the sparse kernel coefficients optimized by the existing L∞ MKL method. In real biomedical applications, L2 MKL may have more advantages over…
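The contrast the abstract draws between sparse (L∞ MKL) and non-sparse (L2 MKL) kernel coefficients can be illustrated with a minimal sketch. This is not the paper's optimizer; the toy kernels and the hand-picked coefficient vectors are assumptions, chosen only to show what a combined kernel K = Σ_i θ_i K_i looks like under the two regimes:

```python
import numpy as np

rng = np.random.default_rng(0)

# Three toy base kernel matrices (linear kernels on random feature blocks);
# illustrative stand-ins for heterogeneous biomedical data sources.
X_blocks = [rng.standard_normal((20, d)) for d in (5, 10, 3)]
kernels = [X @ X.T for X in X_blocks]

def combine(kernels, theta):
    """Combined kernel K = sum_i theta_i * K_i."""
    return sum(t * K for t, K in zip(theta, kernels))

# Sparse coefficients, as L-infinity MKL tends to produce: one kernel dominates
# and the other sources are discarded entirely.
theta_sparse = np.array([1.0, 0.0, 0.0])

# Non-sparse coefficients, as L2 MKL produces: every source keeps some weight,
# here scaled so that ||theta||_2 = 1.
theta_l2 = np.array([0.6, 0.5, 0.4])
theta_l2 = theta_l2 / np.linalg.norm(theta_l2)

K_sparse = combine(kernels, theta_sparse)
K_l2 = combine(kernels, theta_l2)

print(np.allclose(K_sparse, kernels[0]))  # True: only the first source survives
print(np.count_nonzero(theta_l2))         # 3: all sources contribute
```

For data fusion the non-sparse solution is the point of the paper: complementary sources each retain a say in the combined kernel instead of being zeroed out.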

Cited by 97 publications (112 citation statements)
References 33 publications (47 reference statements)
“…Moreover, the clinical kernel function not only outperformed the linear and polynomial kernels when using the LS-SVM classifier, but also performed well in combination with the regular SVM classifier [42].…”
Section: Results for Clinical Data
confidence: 93%
“…, where θ_i are weights of the kernel matrices, p is a parameter determining the norm of the constraint posed on the coefficients (for L2- and Lp-norm MKL, see Kloft et al., 2009; Yu et al., 2010), and K_i are normalized kernel matrices centered in the Hilbert space. Among other improvements, Yu et al. (2010) extended the MKL framework of Lanckriet et al. (2004a) by optimizing various norms in the dual problem of SVMs, which allows non-sparse optimal kernel coefficients θ*_i.…”
Section: Background and Related Work
confidence: 99%
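The quoted passage assumes each K_i is normalized and centered in the Hilbert (feature) space before combination. A minimal sketch of the standard versions of those two operations follows; which exact normalization the cited works use is an assumption here (unit-diagonal, i.e. cosine, normalization is a common choice):

```python
import numpy as np

def center_kernel(K):
    """Center K in feature space: Kc = H K H with H = I - (1/n) * 1 1^T.

    Equivalent to subtracting the mean feature vector before taking
    inner products, without ever forming the features explicitly.
    """
    n = K.shape[0]
    H = np.eye(n) - np.ones((n, n)) / n
    return H @ K @ H

def normalize_kernel(K):
    """Unit-diagonal (cosine) normalization: K_ij / sqrt(K_ii * K_jj)."""
    d = np.sqrt(np.diag(K))
    return K / np.outer(d, d)

rng = np.random.default_rng(1)
X = rng.standard_normal((15, 4))
K = X @ X.T  # toy linear kernel

Kc = center_kernel(K)
Kn = normalize_kernel(K)

# Centering makes every row and column of the kernel sum to zero;
# normalization puts ones on the diagonal.
print(np.allclose(Kc.sum(axis=0), 0.0))  # True
print(np.allclose(np.diag(Kn), 1.0))     # True
```

Centering and normalizing each base kernel puts heterogeneous sources on a comparable scale, so the learned weights θ_i reflect informativeness rather than arbitrary differences in kernel magnitude.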
“…We classify the images in the above three data sets using the proposed methods by integrating the six types of image features of each of them. We compare the proposed method against several of the most recent multiple kernel learning methods [20]. In addition, we compare our method to three recent multi-modal image classification methods published in the computer vision community, including the Gaussian process (GP) method [7], the LPBoost-β method [5], and the LPBoost-B method [5], which have demonstrated state-of-the-art object categorization performance.…”
Section: Evaluation in Single-Label Image Classification
confidence: 99%
“…We implement the compared MKL methods using the codes published by [20]. Following [20], in the LSSVM_∞ and LSSVM_2 methods, the regularization parameter λ is estimated jointly as the kernel coefficient of an identity matrix; in the LSSVM_1 method, λ is set to 1; in all other SVM approaches, the C parameter of the box constraint is fine-tuned in the same range as γ. For the LPBoost-β and LPBoost-B methods, we use the codes published by the authors.…”
Section: Evaluation in Single-Label Image Classification
confidence: 99%
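The trick mentioned above, treating λ as the kernel coefficient of an identity matrix, works because the LS-SVM dual is a linear system in which λ·I appears added to the kernel. A minimal sketch with λ fixed (the toy data and helper names are assumptions; the cited codes estimate λ jointly inside the MKL optimization rather than fixing it):

```python
import numpy as np

def lssvm_train(K, y, lam):
    """Solve the LS-SVM dual linear system:

        [ 0    1^T         ] [ b ]   [ 0 ]
        [ 1    K + lam*I   ] [ a ] = [ y ]

    Since lam enters only as a multiple of I added to K, the identity
    matrix can be treated as one more base kernel whose coefficient
    is the regularization parameter.
    """
    n = len(y)
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = 1.0
    A[1:, 0] = 1.0
    A[1:, 1:] = K + lam * np.eye(n)
    rhs = np.concatenate(([0.0], y))
    sol = np.linalg.solve(A, rhs)
    return sol[1:], sol[0]  # alpha, bias b

def lssvm_predict(K_test, alpha, b):
    """Decision values for test kernel rows K(test, train)."""
    return K_test @ alpha + b

rng = np.random.default_rng(2)
X = rng.standard_normal((30, 2))
y = np.sign(X[:, 0] + 0.1 * rng.standard_normal(30))
K = X @ X.T  # toy linear kernel

alpha, b = lssvm_train(K, y, lam=1.0)  # lam fixed here; LSSVM_1 above sets it to 1

# The solution satisfies the LS-SVM optimality conditions exactly:
print(np.allclose(K @ alpha + 1.0 * alpha + b, y))  # True
print(np.isclose(alpha.sum(), 0.0))                 # True
```

Because the whole problem reduces to one linear solve, the LS-SVM variants are considerably cheaper to train than the quadratic programs behind the box-constrained SVM methods they are compared with.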