Proceedings of the 27th ACM SIGKDD Conference on Knowledge Discovery & Data Mining 2021
DOI: 10.1145/3447548.3467401
A Transformer-based Framework for Multivariate Time Series Representation Learning

Cited by 468 publications (284 citation statements)
References 2 publications
“…Similarly, variational autoencoders (VAEs), RNNs [38] and generative adversarial networks (GANs) [35] have been successfully applied to synthetic offline handwriting generation, but not yet to the online case. For the time-series classification task, standard convolutional architectures [34,42,77,108,117], spatio-temporal methods [6,21,30,47,48] and variants [33,93,95,104], as well as Transformers [26,114], have been employed. While many approaches predict one class after the other, [20,63] predicted sequences in a manner similar to our approach.…”
Section: Methods
confidence: 99%
“…While recent work in time series representation learning has focused on various aspects such as how to sample contrastive pairs (Franceschi et al., 2020; Tonekaboni et al., 2021), taking a Transformer-based approach (Zerveas et al., 2021), exploring complex contrastive learning tasks (Eldele et al., 2021), and constructing temporally hierarchical representations (Yue et al., 2021), none has touched upon learning representations composed of trend and seasonal features. Whereas existing work has focused exclusively on time series classification tasks, Yue et al. (2021) first showed that time series representations learned via contrastive learning establish new state-of-the-art performance on deep forecasting benchmarks.…”
Section: Related Work
confidence: 99%
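The contrastive methods surveyed in this statement share a common core objective; what differs between the cited papers is mainly how positive pairs are sampled. A minimal InfoNCE sketch with in-batch negatives, assuming the two view embeddings have already been computed (all names here are illustrative, not taken from any of the cited implementations):

```python
import torch
import torch.nn.functional as F

def info_nce(z_anchor, z_positive, temperature=0.1):
    """Contrastive (InfoNCE) loss over a batch of representation pairs.

    z_anchor, z_positive: (batch, dim) embeddings of two views of the
    same time series window; the other items in the batch serve as
    negatives for each anchor.
    """
    z_a = F.normalize(z_anchor, dim=-1)
    z_p = F.normalize(z_positive, dim=-1)
    logits = z_a @ z_p.t() / temperature          # (batch, batch) similarities
    labels = torch.arange(z_a.size(0), device=z_a.device)
    return F.cross_entropy(logits, labels)        # match i-th anchor to i-th positive
```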
“…TST (Zerveas et al., 2021): TST is a Transformer-based approach using a reconstruction loss. We use the open-source implementation as is.…”
Section: E Details On Baselines
confidence: 99%
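TST pretrains by masking stretches of the input series and regressing the original values, with the MSE computed only over the masked positions. A minimal sketch of such a masked-MSE reconstruction loss (the function name and tensor layout are illustrative, not taken from the released TST code):

```python
import torch

def masked_reconstruction_loss(x, x_hat, mask):
    """MSE computed only on the masked (hidden) time steps.

    x:     (batch, seq_len, n_vars) original series
    x_hat: (batch, seq_len, n_vars) model reconstruction
    mask:  (batch, seq_len, n_vars) boolean, True where the input was masked
    """
    mask = mask.float()
    sq_err = (x_hat - x) ** 2
    # average the squared error over masked positions only
    return (sq_err * mask).sum() / mask.sum().clamp(min=1)
```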
“…We adopt the time series classification task to evaluate model performance on temporal sequences. We select 10 multivariate datasets from the UEA Time Series Classification Archive (Bagnall et al., 2018) for experiments and follow the data pre-processing in (Zerveas et al., 2021). We use 2 layers for Transformer-based models, with 512 hidden channels and 8 heads for the attention mechanism.…”
Section: Time Series Classification
confidence: 99%
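For reference, the quoted configuration maps onto a standard PyTorch encoder as sketched below; the input projection, mean pooling over time, and the classification head are illustrative choices not specified in the quote:

```python
import torch
import torch.nn as nn

class TSTransformerClassifier(nn.Module):
    """Minimal Transformer encoder for multivariate time series
    classification, matching the quoted setup: 2 layers, 512 hidden
    channels, 8 attention heads."""

    def __init__(self, n_vars, n_classes, d_model=512, n_heads=8, n_layers=2):
        super().__init__()
        self.input_proj = nn.Linear(n_vars, d_model)   # per-timestep embedding
        layer = nn.TransformerEncoderLayer(
            d_model=d_model, nhead=n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=n_layers)
        self.head = nn.Linear(d_model, n_classes)

    def forward(self, x):                              # x: (batch, seq_len, n_vars)
        z = self.encoder(self.input_proj(x))
        return self.head(z.mean(dim=1))                # mean-pool over time
```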