2022
DOI: 10.48550/arxiv.2204.11060
Preprint

Dimension Reduction for time series with Variational AutoEncoders

Abstract: In this work, we explore dimensionality reduction techniques for univariate and multivariate time series data. In particular, we compare wavelet decomposition and convolutional variational autoencoders for dimension reduction. We show that variational autoencoders are a good option for reducing the dimension of high-dimensional data such as ECG signals. We make these comparisons on a real-world, publicly available ECG dataset that exhibits substantial variability, using the reconstruction error as the metric. W…

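The wavelet baseline described in the abstract can be illustrated with a minimal sketch. The code below is a hypothetical example, not the authors' implementation: it assumes the PyWavelets library, uses a synthetic sine-plus-noise signal as a stand-in for the ECG data, reduces the signal to its coarse approximation coefficients, and scores the result by reconstruction error as the abstract describes.

```python
# Minimal sketch (not the paper's code): dimension reduction of a 1-D
# signal by keeping only coarse wavelet approximation coefficients,
# scored by reconstruction MSE.  Assumes: pip install pywavelets numpy
import numpy as np
import pywt

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 1024)
x = np.sin(2 * np.pi * 7 * t) + 0.1 * rng.standard_normal(t.size)  # stand-in for an ECG trace

# Decompose to `level` scales; keep only the approximation coefficients
# (the reduced representation) and zero out all detail coefficients.
level = 4
coeffs = pywt.wavedec(x, "db4", level=level)
reduced = coeffs[0]                                        # low-dimensional code
zeroed = [coeffs[0]] + [np.zeros_like(c) for c in coeffs[1:]]
x_rec = pywt.waverec(zeroed, "db4")[: x.size]              # back to signal space

mse = float(np.mean((x - x_rec) ** 2))
print(f"original dim: {x.size}, reduced dim: {reduced.size}, MSE: {mse:.5f}")
```

The size of the approximation vector roughly halves with each extra decomposition level, which is the dimension-reduction knob this baseline exposes; the VAE alternative instead fixes the code size via its latent dimension.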
Cited by 1 publication (1 citation statement)
References 14 publications
“…2) Variational Autoencoders (VAEs): Variational Autoencoders (VAEs) are deep generative models that learn latent representations of time series data and can generate new time series samples by sampling from the learned latent space [49], [50]. In a VAE, the encoder network maps the input time series to a point in the latent space, and the decoder network generates time series samples from points in that space [51], [52].…”
Section: Data Expansion Techniques (mentioning)
confidence: 99%
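The encoder/decoder structure the citing statement describes can be sketched concretely. The following is a minimal, hypothetical 1-D convolutional VAE in PyTorch, not the architecture from the paper: the encoder maps a time-series window to a Gaussian latent code, the decoder reconstructs the window, and new series are generated by decoding samples from the prior.

```python
# Minimal sketch (assumed architecture, not the paper's): a 1-D
# convolutional VAE for univariate time-series windows.
import torch
import torch.nn as nn

class ConvVAE(nn.Module):
    def __init__(self, seq_len=256, latent_dim=16):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=4, stride=2, padding=1),   # -> seq_len/2
            nn.ReLU(),
            nn.Conv1d(16, 32, kernel_size=4, stride=2, padding=1),  # -> seq_len/4
            nn.ReLU(),
            nn.Flatten(),
        )
        feat = 32 * (seq_len // 4)
        self.fc_mu = nn.Linear(feat, latent_dim)      # mean of q(z|x)
        self.fc_logvar = nn.Linear(feat, latent_dim)  # log-variance of q(z|x)
        self.fc_dec = nn.Linear(latent_dim, feat)
        self.decoder = nn.Sequential(
            nn.Unflatten(1, (32, seq_len // 4)),
            nn.ConvTranspose1d(32, 16, kernel_size=4, stride=2, padding=1),
            nn.ReLU(),
            nn.ConvTranspose1d(16, 1, kernel_size=4, stride=2, padding=1),
        )

    def forward(self, x):
        h = self.encoder(x)
        mu, logvar = self.fc_mu(h), self.fc_logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterization
        return self.decoder(self.fc_dec(z)), mu, logvar

def vae_loss(x, x_rec, mu, logvar):
    # reconstruction error plus KL divergence to the standard normal prior
    rec = nn.functional.mse_loss(x_rec, x, reduction="sum")
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return rec + kl

model = ConvVAE()
x = torch.randn(8, 1, 256)                 # batch of stand-in signals
x_rec, mu, logvar = model(x)
print(vae_loss(x, x_rec, mu, logvar).item())

# Generation, as the citing statement describes: decode prior samples.
z = torch.randn(8, 16)
samples = model.decoder(model.fc_dec(z))
```

In this sketch, dimension reduction means reading off `mu` as a 16-dimensional code for each 256-sample window, and the `rec` term of the loss plays the role of the reconstruction-error metric the abstract uses for comparison.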