2022
DOI: 10.3390/math10122026

Kernel Matrix-Based Heuristic Multiple Kernel Learning

Abstract: Kernel theory is a demonstrated tool that has made its way into nearly all areas of machine learning. However, a serious limitation of kernel methods is knowing which kernel is needed in practice. Multiple kernel learning (MKL) is an attempt to learn a new tailored kernel through the aggregation of a set of valid known kernels. There are generally three approaches to MKL: fixed rules, heuristics, and optimization. Optimization is the most popular; however, a shortcoming of most optimization approaches is that …
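The abstract is truncated, so the paper's specific heuristic is not shown here. As a point of reference, the following is a minimal sketch of heuristic MKL in the common setup: the tailored kernel is a convex combination of base Gram matrices, with weights chosen by a kernel-matrix heuristic (here, kernel-target alignment) rather than by solving an optimization problem. The heuristic, kernel choices, and function names are illustrative assumptions, not the paper's exact method.

```python
# Heuristic MKL sketch: weight base kernels by their alignment with an
# ideal label kernel, then form a convex combination of the Gram matrices.
import numpy as np

def rbf_kernel(X, gamma):
    """Gram matrix of an RBF kernel k(x, z) = exp(-gamma * ||x - z||^2)."""
    sq = np.sum(X**2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * X @ X.T
    return np.exp(-gamma * np.maximum(d2, 0.0))

def alignment(K, Ky):
    """Frobenius alignment <K, Ky> / (||K|| ||Ky||) between two Gram matrices."""
    num = np.sum(K * Ky)
    den = np.linalg.norm(K) * np.linalg.norm(Ky)
    return num / den if den > 0 else 0.0

def heuristic_mkl(X, y, gammas=(0.1, 1.0, 10.0)):
    """Combine base kernels with weights proportional to their target alignment."""
    Ky = np.outer(y, y)                      # ideal (target) kernel from labels in {-1, +1}
    kernels = [rbf_kernel(X, g) for g in gammas]
    scores = np.array([alignment(K, Ky) for K in kernels])
    weights = np.maximum(scores, 0.0)
    weights = weights / weights.sum()        # normalize to a convex combination
    K_combined = sum(w * K for w, K in zip(weights, kernels))
    return K_combined, weights

# Toy usage: two Gaussian blobs with labels in {-1, +1}
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (20, 2)), rng.normal(3, 1, (20, 2))])
y = np.array([-1] * 20 + [1] * 20)
K, w = heuristic_mkl(X, y)
print("kernel weights:", np.round(w, 3))
```

The combined matrix K remains a valid (positive semidefinite) kernel because it is a nonnegative combination of valid kernels, which is the property MKL relies on.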

Cited by 4 publications (3 citation statements)
References 29 publications (26 reference statements)

“…Distance metric learning is a key aspect of metric learning, aiming to learn a function that can measure the distance between samples. Various methods exist for distance metric learning, including prototype-based methods (Gu et al, 2022b ), metric matrix-based methods (Price et al, 2022 ), and maximum margin-based methods (Li X. et al, 2022 ). Among these, Max-Margin Metric Learning (MMML) has emerged as a classic technique maximizing the distances between different classes while minimizing the distances within the same class.…”
Section: Related Work (mentioning)
confidence: 99%
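The citation statement above mentions Max-Margin Metric Learning as maximizing between-class distances while minimizing within-class distances. A minimal sketch of that idea, under the usual Mahalanobis formulation and with a generic triplet hinge loss and PSD projection (an illustrative assumption, not a specific published algorithm), is:

```python
# Max-margin metric learning sketch: learn a PSD matrix M so that
# d_M(x, z) = (x - z)^T M (x - z) is small within a class and large across
# classes by at least a fixed margin.
import numpy as np

def mahalanobis_sq(M, x, z):
    d = x - z
    return d @ M @ d

def project_psd(M):
    """Project a symmetric matrix onto the PSD cone by clipping eigenvalues."""
    w, V = np.linalg.eigh((M + M.T) / 2.0)
    return (V * np.maximum(w, 0.0)) @ V.T

def mmml(X, y, margin=1.0, lr=0.01, epochs=50, seed=0):
    rng = np.random.default_rng(seed)
    n, d = X.shape
    M = np.eye(d)                                   # start from Euclidean distance
    for _ in range(epochs):
        for i in rng.permutation(n):
            same = np.where((y == y[i]) & (np.arange(n) != i))[0]
            diff = np.where(y != y[i])[0]
            if len(same) == 0 or len(diff) == 0:
                continue
            j, k = rng.choice(same), rng.choice(diff)
            # hinge: push d(x_i, x_k) above d(x_i, x_j) + margin
            loss = margin + mahalanobis_sq(M, X[i], X[j]) - mahalanobis_sq(M, X[i], X[k])
            if loss > 0:
                a, b = X[i] - X[j], X[i] - X[k]
                M -= lr * (np.outer(a, a) - np.outer(b, b))   # subgradient step
                M = project_psd(M)                            # keep M a valid metric
    return M

# Toy usage: two classes; the learned M should separate them more than the identity does.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 1, (15, 3)), rng.normal(2, 1, (15, 3))])
y = np.array([0] * 15 + [1] * 15)
M = mmml(X, y)
print("learned metric matrix:\n", np.round(M, 2))
```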
“…Jiang et al [23] proposed a high-order norm-product regularized multiple kernel learning framework to optimize the discrimination performance. Stanton R et al [24] explored different divergence measures on the values in the kernel matrices and reproducing kernel Hilbert space (RKHS). Fatemeh and Sattar [25] formulated multiple kernel learning in a bi-level learning paradigm consisting of the kernel combination weight learning (KWL) stage and the self-paced learning (SPL) stage.…”
Section: Related Work (mentioning)
confidence: 99%
“…Even though multi-task learning improves accuracy performance, a similarity measure between tasks is highly required, which is not a priority for many diseases [18]-[19]. The utilization of transcriptomics subjects possesses diverse challenges in interpretation.…”
Section: Introduction (mentioning)
confidence: 99%