2022
DOI: 10.1016/j.knosys.2022.108606

TimeCLR: A self-supervised contrastive learning framework for univariate time series representation

Cited by 35 publications (16 citation statements)
References 22 publications
“…Recently, in time series pre-training, many designs of positive and negative pairs have been proposed that exploit the invariant properties of time series. Concretely, to tie representation learning to temporal variations, TimeCLR (Yang et al., 2022) adopts DTW (Mueen & Keogh, 2016) to generate phase-shift and amplitude-change augmentations, which are better suited to time series. TS2Vec (Yue et al., 2022) splits multiple time series into several patches and further defines the contrastive loss in both instance-wise and patch-wise aspects.…”
Section: Related Work
confidence: 99%
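The DTW-based phase-shift augmentation described in the excerpt above can be sketched roughly as follows. This is a minimal illustration only: the self-shifted target, the averaging step, and all function names are assumptions for exposition, not TimeCLR's actual implementation.

```python
# Sketch of a DTW-based "phase-shift" augmentation: align a series to a
# circularly shifted copy of itself via dynamic time warping, then resample
# along the warping path to get a smoothly phase-shifted positive view.

def dtw_path(a, b):
    """Classic O(len(a)*len(b)) DTW; returns the optimal alignment path."""
    n, m = len(a), len(b)
    inf = float("inf")
    cost = [[inf] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])
            cost[i][j] = d + min(cost[i - 1][j - 1], cost[i - 1][j], cost[i][j - 1])
    # Backtrack from (n, m) to (1, 1), preferring the diagonal move on ties.
    path, i, j = [], n, m
    while i > 0 and j > 0:
        path.append((i - 1, j - 1))
        _, i, j = min((cost[i - 1][j - 1], i - 1, j - 1),
                      (cost[i - 1][j], i - 1, j),
                      (cost[i][j - 1], i, j - 1))
    return path[::-1]

def phase_shift_augment(x, shift=3):
    """Warp x onto a circularly shifted copy of itself (a positive pair)."""
    target = x[shift:] + x[:shift]
    path = dtw_path(x, target)
    # Average the target values aligned to each original time step.
    out = [0.0] * len(x)
    cnt = [0] * len(x)
    for i, j in path:
        out[i] += target[j]
        cnt[i] += 1
    return [o / c for o, c in zip(out, cnt)]

x = [0.0, 1.0, 2.0, 1.0, 0.0, -1.0, -2.0, -1.0]
view = phase_shift_augment(x, shift=2)
```

Because DTW aligns on shape rather than absolute time index, the warped view preserves the waveform while shifting its phase, which is the invariance the augmentation is meant to encode.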
“…CoST (Woo et al., 2022) utilizes both time-domain and frequency-domain contrastive losses to learn disentangled seasonal-trend representations of TS. TimeCLR (Yang et al., 2022) introduces phase-shift and amplitude-change augmentations, which are data augmentation methods based on DTW. TF-C (Zhang et al., 2022) learns both time- and frequency-based representations of TS and proposes a novel time-frequency consistency architecture.…”
Section: Related Work
confidence: 99%
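The two ingredients these frequency-aware methods share can be sketched in a toy form: a frequency-domain view of a series (DFT magnitudes) and an InfoNCE-style contrastive loss that pulls a positive view toward its anchor and pushes negatives away. This is an illustrative assumption, not any of the cited papers' implementations; in particular, real methods contrast learned encoder embeddings, not raw views.

```python
# Toy frequency-domain view + InfoNCE-style contrastive loss, loosely in the
# spirit of CoST / TF-C. Pure-stdlib; O(n^2) DFT is fine for illustration.
import cmath
import math

def dft_magnitude(x):
    """Frequency-domain view: magnitudes of the discrete Fourier transform."""
    n = len(x)
    return [abs(sum(x[t] * cmath.exp(-2j * math.pi * k * t / n)
                    for t in range(n))) for k in range(n)]

def cosine_sim(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def info_nce(anchor, positive, negatives, tau=0.5):
    """InfoNCE: maximize similarity to the positive relative to negatives."""
    logits = [cosine_sim(anchor, positive) / tau]
    logits += [cosine_sim(anchor, n) / tau for n in negatives]
    z = sum(math.exp(l) for l in logits)
    return -math.log(math.exp(logits[0]) / z)
```

For instance, `info_nce(dft_magnitude(x), dft_magnitude(x_aug), [dft_magnitude(n) for n in negs])` would contrast frequency-domain views of an augmented pair against negatives; a time-domain loss over the raw views can be added and the two losses summed.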
“…We conduct experiments on transfer learning for classification in the in-domain and cross-domain settings used in previous works (Zhang et al., 2022; Eldele et al., 2021; Dong et al., 2023), by applying our SoftCLT to TS-TCC and CA-TCC. As baseline methods, we consider TS-SD (Shi et al., 2021), TS2Vec (Yue et al., 2022), Mixing-Up (Wickstrøm et al., 2022), CLOCS (Kiyasseh et al., 2021), CoST (Woo et al., 2022), LaST (Wang et al., 2022), TF-C (Zhang et al., 2022), TS-TCC (Eldele et al., 2021), TST (Zerveas et al., 2021) and SimMTM (Dong et al., 2023).…”
Section: Transfer Learning
confidence: 99%
“…The existing methods can be summarized into the following groups: JPEG compression artifacts [12,13], edge inconsistencies [14], color consistency [15], visual similarity [9,16], EXIF inconsistency [17], camera model [18,19], and noise pattern [20-22]. Specifically, most current methods focus on mining contextual features between images in various ways, ignoring the potential contextual relations within images.…”
Section: Introduction
confidence: 99%