2017
DOI: 10.1007/978-3-319-59126-1_35

Deep Kernelized Autoencoders

Abstract: In this paper we introduce the deep kernelized autoencoder, a neural network model that allows an explicit approximation of (i) the mapping from an input space to an arbitrary, user-specified kernel space and (ii) the back-projection from such a kernel space to the input space. The proposed method is based on traditional autoencoders and is trained through a new unsupervised loss function. During training, we optimize both the reconstruction accuracy of input samples and the alignment between a kernel ma…
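The joint objective described in the abstract (reconstruction plus kernel alignment) can be sketched as follows. This is an illustrative PyTorch sketch, not the authors' reference implementation: the layer sizes, the weighting term `lam`, and the RBF choice for the target kernel matrix are all assumptions.

```python
import torch
import torch.nn as nn

# Illustrative sketch of a deep kernelized autoencoder objective:
# reconstruct the input and align the inner products of the codes
# with a user-specified kernel matrix (all hyperparameters assumed).
class DKAE(nn.Module):
    def __init__(self, in_dim=20, code_dim=5):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_dim, 32), nn.ReLU(),
                                     nn.Linear(32, code_dim))
        self.decoder = nn.Sequential(nn.Linear(code_dim, 32), nn.ReLU(),
                                     nn.Linear(32, in_dim))

    def forward(self, x):
        z = self.encoder(x)
        return z, self.decoder(z)

def dkae_loss(x, x_hat, z, K_batch, lam=0.1):
    recon = ((x - x_hat) ** 2).mean()     # reconstruction accuracy
    C = z @ z.t()                         # inner products of the codes
    align = ((C - K_batch) ** 2).mean()   # kernel alignment term
    return recon + lam * align

# toy usage with an assumed RBF target kernel on the batch
x = torch.randn(16, 20)
K_batch = torch.exp(-0.1 * torch.cdist(x, x) ** 2)
model = DKAE()
z, x_hat = model(x)
dkae_loss(x, x_hat, z, K_batch).backward()
```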

Cited by 11 publications (14 citation statements)
References 20 publications
“…In this work the measure of this loss function is the cross entropy score. Then the encoder F and decoder G functions can be defined as [22] …”
Section: Methods
confidence: 99%
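The definition that follows "[22]" is truncated in the excerpt. As a hedged sketch of one standard formulation, an encoder F and decoder G trained with a binary cross-entropy reconstruction score (assuming inputs scaled to [0, 1]; the layer sizes are placeholders) could look like:

```python
import torch
import torch.nn as nn

# Hedged sketch: encoder F and decoder G with a cross-entropy
# reconstruction loss; inputs are assumed to be scaled to [0, 1].
encoder_F = nn.Sequential(nn.Linear(784, 128), nn.ReLU(),
                          nn.Linear(128, 32))
decoder_G = nn.Sequential(nn.Linear(32, 128), nn.ReLU(),
                          nn.Linear(128, 784), nn.Sigmoid())

x = torch.rand(8, 784)                   # toy batch in [0, 1]
x_hat = decoder_G(encoder_F(x))          # reconstruction G(F(x))
loss = nn.functional.binary_cross_entropy(x_hat, x)
loss.backward()
```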
“…Learning compressed representations of MTS with missing data. To handle missing data effectively and, at the same time, avoid the undesired biases introduced by imputation, we propose a kernel alignment procedure [20] that matches the dot product matrix of the learned representations with a kernel matrix. Specifically, we exploit the recently proposed Time series Cluster Kernel (TCK) [21], which computes similarities between MTS with missing values without using imputation.…”
Section: Contributions
confidence: 99%
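As a minimal sketch of that alignment term, assume the TCK matrix K has been precomputed for the whole dataset and a mini-batch is indexed by `idx` (the variable names and the mean-squared mismatch are assumptions):

```python
import torch

def alignment_loss(z, K, idx):
    """Match inner products of the learned codes z to the block of a
    precomputed kernel matrix K (e.g. TCK) selected by the batch idx."""
    K_block = K[idx][:, idx]   # pairwise similarities within the batch
    C = z @ z.t()              # dot-product matrix of the codes
    return ((C - K_block) ** 2).mean()

# toy usage with a random PSD stand-in for the precomputed TCK matrix
N, batch, d = 100, 16, 8
A = torch.randn(N, N)
K = A @ A.t() / N                             # placeholder PSD matrix
idx = torch.randperm(N)[:batch]
z = torch.randn(batch, d, requires_grad=True)
alignment_loss(z, K, idx).backward()
```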
“…However, imputation injects biases into the data that may negatively affect the quality of the representations and conceal potentially useful information contained in the missingness patterns. To overcome these shortcomings, we introduce a kernel alignment procedure [20] that allows us to preserve the pairwise similarities of the inputs in the learned representations. These pairwise similarities are encoded in a positive semi-definite matrix K that is defined by the designer and passed as input to the model.…”
Section: Accepted Manuscript
confidence: 99%
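For illustration (the cited work leaves the choice of K to the designer), K could for instance be an RBF kernel matrix, which is positive semi-definite by construction:

```python
import torch

def designer_kernel(x, gamma=0.5):
    """One possible designer-specified K: an RBF kernel matrix,
    positive semi-definite by construction (gamma is assumed)."""
    return torch.exp(-gamma * torch.cdist(x, x) ** 2)

x = torch.randn(50, 10)
K = designer_kernel(x)
# sanity check: eigenvalues of a PSD matrix are (numerically) >= 0
assert torch.linalg.eigvalsh(K).min() > -1e-5
```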
“…With this approach, extensions for t-SNE [39] and other classic manifold learning methods [40,41] were developed, which enable the computation of OOS solutions. A particularly interesting realization of this approach is given by deep kernelized autoencoders [42], which train an autoencoder network with an additional objective: to minimize not only the reconstruction error of the data points themselves but also the mismatch between the dot product of a batch of embedding vectors and the corresponding block from a kernel matrix. The decoder part of the autoencoder network thereby also provides a mapping from the embedding space back to the original feature space, which can be used to compute the pre-image of an embedding vector [10].…”
Section: Related Work
confidence: 99%
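In code, the out-of-sample embedding is one encoder pass and the pre-image one decoder pass. A hedged sketch with assumed layer sizes (the networks would be trained first; training is omitted here):

```python
import torch
import torch.nn as nn

# Hedged sketch of OOS embedding and pre-image computation with a
# trained autoencoder (layer sizes assumed; training omitted).
encoder = nn.Sequential(nn.Linear(20, 32), nn.ReLU(), nn.Linear(32, 5))
decoder = nn.Sequential(nn.Linear(5, 32), nn.ReLU(), nn.Linear(32, 20))

x_new = torch.randn(1, 20)   # out-of-sample point
z_new = encoder(x_new)       # OOS embedding in a single forward pass

# any vector in the embedding space, e.g. an interpolation between two
# embeddings, can be mapped back to the input space by the decoder
z_mid = 0.5 * (z_new + encoder(torch.randn(1, 20)))
pre_image = decoder(z_mid)   # approximate pre-image of z_mid
```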
“…Similarly, in addition to a last layer W_l, the SimEc network can also be extended by a mirrored version of f(x_i), thereby adding a decoder part to the network, which can be used to compute the pre-image of an embedding, as in the deep kernelized autoencoder networks [42].…”
Section: 2.1
confidence: 99%
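A hedged sketch of that mirrored extension, reading "mirrored" as a tied-weight decoder that reuses the transposed encoder weights (the SimEc specifics are not shown in the excerpt, and biases are ignored in the decoder for brevity):

```python
import torch
import torch.nn as nn

class MirroredSimEc(nn.Module):
    """Embedding map f(x) plus a mirrored (tied-weight) decoder that
    applies the transposed encoder weights, enabling pre-image
    computation as in the deep kernelized autoencoders [42]."""
    def __init__(self, in_dim=20, hid=32, emb_dim=2):
        super().__init__()
        self.l1 = nn.Linear(in_dim, hid)
        self.l2 = nn.Linear(hid, emb_dim)   # last layer W_l

    def embed(self, x):                     # f(x_i)
        return self.l2(torch.tanh(self.l1(x)))

    def pre_image(self, z):                 # mirrored, tied-weight decoder
        h = torch.tanh(z @ self.l2.weight)  # uses W_l transposed
        return h @ self.l1.weight           # uses W_1 transposed

net = MirroredSimEc()
x = torch.randn(4, 20)
x_back = net.pre_image(net.embed(x))        # approximate pre-images
```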