2021
DOI: 10.48550/arxiv.2106.14112
Preprint

Time-Series Representation Learning via Temporal and Contextual Contrasting

Cited by 13 publications (32 citation statements) | References 15 publications
“…For example, CLOCS defines adjacent segments of a time series as positive pairs [40], and TNC assumes overlapping neighborhoods of time series should have similar representations [45]. These methods leverage temporal invariance to define the positive pairs used to compute the contrastive loss, but other invariances, such as transformation invariance (e.g., SimCLR [39]) and contextual invariance (e.g., TS2vec [46] and TS-TCC [47]), and other augmentations are possible. In this work, we propose an augmentation bank that exploits multiple invariances to generate diverse augmentations (Sec.…”
Section: Related Work
confidence: 99%
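The statement above contrasts ways of defining positive pairs for a contrastive loss. As a concrete illustration, here is a minimal sketch, not taken from any of the cited papers, of the temporal-invariance idea: adjacent segments of one series form a positive pair, scored with an InfoNCE-style loss. The function names, segment length, and shapes are illustrative assumptions.

```python
# Illustrative sketch only: adjacent segments of the same series act as a
# positive pair (the CLOCS-style temporal-invariance idea), and an
# InfoNCE-style loss pulls their embeddings together while treating the
# rest of the batch as negatives. Names and shapes are assumptions.
import torch
import torch.nn.functional as F

def adjacent_segments(x: torch.Tensor, seg_len: int):
    """Split (batch, time, channels) series into two adjacent segments."""
    return x[:, :seg_len], x[:, seg_len:2 * seg_len]

def info_nce(z_a: torch.Tensor, z_b: torch.Tensor, temperature: float = 0.1):
    """Contrastive loss where (z_a[i], z_b[i]) are positive pairs and all
    other pairings within the batch serve as negatives."""
    z_a = F.normalize(z_a, dim=1)
    z_b = F.normalize(z_b, dim=1)
    logits = z_a @ z_b.t() / temperature              # (B, B) similarities
    targets = torch.arange(z_a.size(0), device=z_a.device)
    return F.cross_entropy(logits, targets)           # positives on diagonal
```

In practice the two segments would each pass through a shared encoder before `info_nce` is applied to the resulting embeddings; swapping how the pairs are formed (adjacent segments, overlapping neighborhoods, augmented views) changes which invariance the loss encodes.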
“…Unsupervised representation learning for time series. Representation learning on sequence data is a closely related and well-studied direction of research [Chung et al., 2015, Fraccaro et al., 2016, Krishnan et al., 2017, Bayer et al., 2021]. However, few efforts have been made in unsupervised representation learning for time series [Längkvist et al., 2014, Eldele et al., 2021a, Yue et al., 2021a]. Applying auto-encoders [Choi et al., 2016] and seq-to-seq models [Malhotra et al., 2017, Lyu et al., 2018] with an encoder-decoder architecture to reconstruct the input is a preliminary approach to unsupervised representation learning for time series.…”
Section: Related Work
confidence: 99%
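As a concrete reference for the reconstruction-based approach described in the statement above, here is a minimal sketch assuming a simple GRU encoder-decoder; the architecture, class name, and dimensions are illustrative, not those of the cited works.

```python
# Illustrative sketch of reconstruction-based representation learning:
# an encoder-decoder is trained to reproduce its input, and the encoder's
# final hidden state is kept as the unsupervised representation.
import torch
import torch.nn as nn

class SeqAutoencoder(nn.Module):
    def __init__(self, n_channels: int, hidden_dim: int):
        super().__init__()
        self.encoder = nn.GRU(n_channels, hidden_dim, batch_first=True)
        self.decoder = nn.GRU(hidden_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, n_channels)

    def forward(self, x):                      # x: (batch, time, channels)
        _, h = self.encoder(x)                 # h: (1, batch, hidden)
        rep = h[-1]                            # representation of the series
        dec_in = rep.unsqueeze(1).expand(-1, x.size(1), -1)
        recon, _ = self.decoder(dec_in)
        return self.out(recon), rep

# Training minimizes a reconstruction loss such as
#   nn.MSELoss()(model(x)[0], x)
# after which `rep` is reused as the representation for downstream tasks.
```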
“…Contrastive Predictive Coding (CPC) [Oord et al., 2018] learns representations by using a powerful autoregressive model in latent space to make predictions about the future, relying on Noise-Contrastive Estimation [Gutmann and Hyvärinen, 2010] for the loss function in a similar way. Temporal and Contextual Contrasting (TS-TCC) [Eldele et al., 2021a] improves on CPC and learns robust representations through a harder prediction task against perturbations introduced by different timestamps and augmentations. Temporal Neighborhood Coding (TNC) [Tonekaboni et al., 2021] presents a novel neighborhood-based unsupervised learning framework and applies sample-weight adjustment for non-stationary multivariate time series.…”
Section: Related Work
confidence: 99%
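To make the CPC mechanism described in the statement above concrete, the following is a minimal sketch under assumed shapes and modules (it is not the CPC or TS-TCC reference implementation): an autoregressive model summarizes past latents into a context vector, linear heads predict latents k steps ahead, and an InfoNCE loss scores each true future latent against in-batch negatives.

```python
# Illustrative CPC-style head, with assumed names and dimensions. The
# latent sequence z would come from some feature encoder applied to raw
# time-series windows; that encoder is omitted here.
import torch
import torch.nn as nn
import torch.nn.functional as F

class CPCHead(nn.Module):
    def __init__(self, latent_dim: int, context_dim: int, k_steps: int):
        super().__init__()
        self.ar = nn.GRU(latent_dim, context_dim, batch_first=True)
        self.predictors = nn.ModuleList(
            [nn.Linear(context_dim, latent_dim) for _ in range(k_steps)]
        )

    def forward(self, z):                      # z: (batch, time, latent_dim)
        k_steps = len(self.predictors)
        t = z.size(1) - k_steps                # last usable context step
        c, _ = self.ar(z[:, :t])               # context from past latents
        c_t = c[:, -1]                         # (batch, context_dim)
        loss = 0.0
        for k, pred in enumerate(self.predictors, start=1):
            z_future = z[:, t + k - 1]         # true latent k steps ahead
            logits = pred(c_t) @ z_future.t()  # (B, B): batch negatives
            targets = torch.arange(z.size(0), device=z.device)
            loss = loss + F.cross_entropy(logits, targets)
        return loss / k_steps
```

TS-TCC's harder variant of this task, per the statement above, has the context from one augmented view predict the future latents of a differently augmented view, so the prediction must also survive the augmentation-induced perturbations.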