2012
DOI: 10.1561/2200000036
Kernels for Vector-Valued Functions: A Review

Abstract: Kernel methods are among the most popular techniques in machine learning. From a frequentist/discriminative perspective they play a central role in regularization theory as they provide a natural choice for the hypotheses space and the regularization functional through the notion of reproducing kernel Hilbert spaces. From a Bayesian/generative perspective they are the key in the context of Gaussian processes, where the kernel function is also known as the covariance function. Traditionally, kernel methods have…
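For readers unfamiliar with the paper's subject, the central objects it reviews are matrix-valued kernels for learning vector-valued functions. Below is a minimal sketch of one construction covered in that literature, the separable (intrinsic coregionalization) kernel K(x, x′) = k(x, x′) B; the function names, the Gaussian form of the scalar kernel, and the 2×2 coregionalization matrix B are illustrative assumptions, not taken from the paper itself.

```python
import numpy as np

def scalar_rbf(x, x_prime, lengthscale=1.0):
    # Standard scalar Gaussian (RBF) kernel k(x, x').
    return np.exp(-0.5 * np.sum((x - x_prime) ** 2) / lengthscale ** 2)

def separable_matrix_kernel(x, x_prime, B, lengthscale=1.0):
    # Separable matrix-valued kernel for a D-output function:
    # K(x, x') = k(x, x') * B, where B is a D x D positive semi-definite
    # coregionalization matrix encoding correlations between outputs.
    return scalar_rbf(x, x_prime, lengthscale) * B

# Example with D = 2 outputs (B assumed positive semi-definite).
B = np.array([[1.0, 0.6],
              [0.6, 1.0]])
x, x_prime = np.array([0.0, 1.0]), np.array([0.2, 0.9])
print(separable_matrix_kernel(x, x_prime, B))  # 2 x 2 block of the full Gram matrix
```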
Cited by 405 publications (190 citation statements; citing publications span 2013–2024)
References 57 publications
“…With the assumption of a multivariate distribution on class labels, we propose using MMD [18] to constrain the mismatch in the marginal distributions. Following a geometric intuition described in [31], we assume the conditional probability distributions should be similar if the marginal distributions are shifted close to each other. Therefore, we propose learning a vector-valued function in the RKHS that gives the best classification performance on the target domain data, where we put an MMD regularization on the parametric input kernel estimation and adopt the output kernel estimation approach presented in [17].…”
Section: Proposed Methods
Citation type: mentioning; confidence: 99%
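For context on the MMD regularizer referred to in the excerpt above: maximum mean discrepancy compares the kernel mean embeddings of two samples. Below is a minimal sketch of the (biased) empirical squared-MMD estimator with a Gaussian kernel; the function names, bandwidth gamma, and synthetic data are illustrative assumptions, not taken from the cited works.

```python
import numpy as np

def rbf_kernel(A, B, gamma=1.0):
    # Pairwise Gaussian kernel matrix: k(a, b) = exp(-gamma * ||a - b||^2).
    sq_dists = (
        np.sum(A ** 2, axis=1)[:, None]
        + np.sum(B ** 2, axis=1)[None, :]
        - 2.0 * A @ B.T
    )
    return np.exp(-gamma * sq_dists)

def mmd_squared(X_source, X_target, gamma=1.0):
    # Biased empirical estimate of squared MMD between the two samples.
    K_ss = rbf_kernel(X_source, X_source, gamma)
    K_tt = rbf_kernel(X_target, X_target, gamma)
    K_st = rbf_kernel(X_source, X_target, gamma)
    return K_ss.mean() + K_tt.mean() - 2.0 * K_st.mean()

# Example: two samples with shifted means give a clearly positive MMD.
rng = np.random.default_rng(0)
X_s = rng.normal(0.0, 1.0, size=(100, 5))
X_t = rng.normal(0.5, 1.0, size=(100, 5))
print(mmd_squared(X_s, X_t, gamma=0.5))
```

Minimizing such a term over a (parametric) input kernel pushes the source and target marginal distributions closer in the corresponding RKHS.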
“…Recently, multi-task learning has been a topic of substantial research in the area of kernel methods (see e.g. [25], [26], [27]). …”
Section: Related Work
Citation type: mentioning; confidence: 99%
“…[29], [30], [3]), as well as a standard building block in more theoretical work concerning the development of multi-task learning methods (see e.g. [25], [31], [27]). Concerning specifically the zero-shot learning setting, Kronecker product kernel methods have been shown to outperform a variety of baseline methods in areas such as recommender systems [11], drug-target prediction [3] and image categorization [6].…”
Section: Related Work
Citation type: mentioning; confidence: 99%
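The Kronecker product kernel methods referred to above define a joint kernel over (instance, task) pairs as the product of two per-domain kernels, so the joint Gram matrix over all pairs is the Kronecker product of the two per-domain Gram matrices. A minimal sketch follows; the linear kernels, array sizes, and function names are illustrative assumptions, not taken from the cited works.

```python
import numpy as np

def linear_kernel(A, B):
    # Linear kernel Gram matrix between the rows of A and the rows of B.
    return A @ B.T

def kronecker_pair_kernel(X, Z):
    # Joint Gram matrix over all (x, z) pairs:
    # k((x_i, z_k), (x_j, z_l)) = k_X(x_i, x_j) * k_Z(z_k, z_l),
    # which is exactly the Kronecker product of the two Gram matrices.
    K_X = linear_kernel(X, X)
    K_Z = linear_kernel(Z, Z)
    return np.kron(K_X, K_Z)

# Example: 4 instance vectors and 3 task/label vectors give a 12 x 12 joint Gram matrix.
rng = np.random.default_rng(1)
X = rng.normal(size=(4, 6))
Z = rng.normal(size=(3, 2))
print(kronecker_pair_kernel(X, Z).shape)  # (12, 12)
```

Because the pair kernel factorizes, predictions for unseen (instance, task) combinations only require the two per-domain kernels, which is what makes the construction attractive for zero-shot settings.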
“…Let M_m be the row index set and N_n the column index set, so that J = M × N. We use the notation MGP(φ, C_N, C_M) to denote the MV-GP with mean function φ: M × N → R, row covariance function C_M: M × M → R and column covariance function C_N: N × N → R. The covariance function of the MV-GP has a product structure [29], so the prior covariance between matrix entries can be decomposed as the product of the row and column covariances, C((m, n), (m′, n′)) = C_M(m, m′) C_N(n, n′).…”
Section: Constrained Gaussian Process Regression For Transposable
Citation type: mentioning; confidence: 99%
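The product covariance in the excerpt above means that, once the matrix entries are vec-ordered, the full prior covariance is the Kronecker product of the row and column covariance matrices. Below is a minimal sketch of sampling from such a zero-mean MV-GP prior; the squared-exponential covariances, the index sizes, and the jitter term are illustrative assumptions, not taken from the cited paper.

```python
import numpy as np

def rbf_cov(points, lengthscale=1.0):
    # Squared-exponential covariance over a set of 1-D index locations.
    d = points[:, None] - points[None, :]
    return np.exp(-0.5 * (d / lengthscale) ** 2)

# Illustrative sizes: M rows and N columns of the latent matrix F.
M, N = 5, 4
C_M = rbf_cov(np.arange(M, dtype=float))   # row covariance C_M(m, m')
C_N = rbf_cov(np.arange(N, dtype=float))   # column covariance C_N(n, n')

# Product structure: Cov[F[m, n], F[m', n']] = C_M(m, m') * C_N(n, n');
# with entries ordered as i = m * N + n, this is the Kronecker product.
C_full = np.kron(C_M, C_N)

# Draw one sample from the zero-mean MV-GP prior and reshape it to an M x N matrix
# (small jitter added for numerical stability of the covariance).
rng = np.random.default_rng(2)
f = rng.multivariate_normal(np.zeros(M * N), C_full + 1e-8 * np.eye(M * N))
F = f.reshape(M, N)
print(F.shape)  # (5, 4)
```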