2022
DOI: 10.48550/arxiv.2204.11291
Preprint

Large Scale Time-Series Representation Learning via Simultaneous Low and High Frequency Feature Bootstrapping

Abstract: Learning representations from unlabeled time-series data is a challenging problem. Most existing self-supervised and unsupervised approaches in the time-series domain do not capture low- and high-frequency features at the same time. Further, some of these methods employ large-scale models such as transformers or rely on computationally expensive techniques such as contrastive learning. To tackle these problems, we propose a non-contrastive self-supervised learning approach which efficiently captures low and high frequency […]
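
The abstract describes a non-contrastive self-supervised objective, and the citing papers quoted below characterize it as BYOL-based. As a rough, hypothetical illustration only, a minimal PyTorch sketch of a BYOL-style training step for time series might look like this; the encoder design, head sizes, augmentation, and hyperparameters are assumptions, not the paper's actual implementation.

```python
# Hypothetical BYOL-style non-contrastive step for time series (illustrative
# assumptions throughout; not the paper's actual code).
import copy
import torch
import torch.nn as nn
import torch.nn.functional as F


class TSEncoder(nn.Module):
    """Toy 1D-CNN encoder: (batch, channels, length) -> (batch, dim)."""
    def __init__(self, in_channels: int = 1, dim: int = 128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(in_channels, 64, kernel_size=7, padding=3), nn.ReLU(),
            nn.Conv1d(64, dim, kernel_size=7, padding=3), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x).squeeze(-1)


def mlp_head(dim: int = 128, hidden: int = 256) -> nn.Sequential:
    """Small MLP reused here for both the projector and the predictor."""
    return nn.Sequential(nn.Linear(dim, hidden), nn.BatchNorm1d(hidden),
                         nn.ReLU(), nn.Linear(hidden, dim))


def regression_loss(p: torch.Tensor, z: torch.Tensor) -> torch.Tensor:
    """BYOL-style loss: 2 - 2*cos(p, z), with a stop-gradient on the target z."""
    return 2 - 2 * F.cosine_similarity(p, z.detach(), dim=-1).mean()


@torch.no_grad()
def ema_update(target: nn.Module, online: nn.Module, tau: float = 0.99) -> None:
    """Exponential moving average of online weights into the target network."""
    for t, o in zip(target.parameters(), online.parameters()):
        t.mul_(tau).add_(o, alpha=1 - tau)


# One illustrative training step on random data.
online_enc, online_proj, predictor = TSEncoder(), mlp_head(), mlp_head()
target_enc, target_proj = copy.deepcopy(online_enc), copy.deepcopy(online_proj)
for p in list(target_enc.parameters()) + list(target_proj.parameters()):
    p.requires_grad_(False)

opt = torch.optim.Adam(
    list(online_enc.parameters()) + list(online_proj.parameters())
    + list(predictor.parameters()), lr=1e-3)

view_a = torch.randn(32, 1, 256)                   # stand-ins for two augmented
view_b = view_a + 0.1 * torch.randn_like(view_a)   # views of the same batch

p_a = predictor(online_proj(online_enc(view_a)))   # online branch prediction
with torch.no_grad():
    z_b = target_proj(target_enc(view_b))          # target branch projection

loss = regression_loss(p_a, z_b)
loss.backward()
opt.step()
ema_update(target_enc, online_enc)
ema_update(target_proj, online_proj)
```

The stop-gradient on the target branch and the EMA weight update are what make this kind of objective non-contrastive: no negative pairs or large batch comparisons are required, which is the efficiency point the abstract raises against contrastive learning.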

Cited by 2 publications (2 citation statements)
References 16 publications
“…Gorade et al. [76] proposed a BYOL-based approach based on the combination of two different sets of projector plus predictor designed to extract, respectively, low- and high-frequency characteristic features from the embedding. Zhang et al.…”
Section: Novel Methods For Time-series Data (mentioning)
confidence: 99%
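
The arrangement described in this statement (two projector plus predictor pairs attached to a shared embedding, one per frequency band) could be sketched roughly as follows. Every module name, layer size, and the random stand-in target projections below are assumptions made for illustration, not the authors' implementation.

```python
# Hedged sketch of the dual-head idea: one projector + predictor pair per
# frequency band on top of a shared embedding (illustrative assumptions only).
import torch
import torch.nn as nn
import torch.nn.functional as F


def head(dim: int = 128, hidden: int = 256) -> nn.Sequential:
    """Small MLP used for each projector and each predictor."""
    return nn.Sequential(nn.Linear(dim, hidden), nn.ReLU(), nn.Linear(hidden, dim))


class DualFrequencyHeads(nn.Module):
    """Two projector+predictor pairs over one shared embedding: one intended
    for low-frequency structure, one for high-frequency detail."""
    def __init__(self, dim: int = 128):
        super().__init__()
        self.proj_low, self.pred_low = head(dim), head(dim)
        self.proj_high, self.pred_high = head(dim), head(dim)

    def forward(self, emb: torch.Tensor):
        p_low = self.pred_low(self.proj_low(emb))
        p_high = self.pred_high(self.proj_high(emb))
        return p_low, p_high


def neg_cosine(p: torch.Tensor, z: torch.Tensor) -> torch.Tensor:
    """Non-contrastive regression loss with a stop-gradient on the target."""
    return 2 - 2 * F.cosine_similarity(p, z.detach(), dim=-1).mean()


# Illustrative forward pass: the online embedding and the two target-branch
# projections are random tensors standing in for real encoder outputs.
online_emb = torch.randn(32, 128)
target_low = torch.randn(32, 128)    # target projection, low-frequency branch
target_high = torch.randn(32, 128)   # target projection, high-frequency branch

heads = DualFrequencyHeads()
p_low, p_high = heads(online_emb)
loss = neg_cosine(p_low, target_low) + neg_cosine(p_high, target_high)  # per-band losses summed
loss.backward()
```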
“…Moreover, the previously described methods were used as a reference to develop novel approaches for time series which were not specifically designed for medical applications, but tested on medical datasets. For example, Gorade et al. [59] proposed a BYOL-based non-contrastive large-scale time-series representation learning approach via simultaneous bootstrapping of low and high frequency input features; Xi et al. [60] introduced a self-supervised temporal relation mining module in their work for semi-supervised time series classification; Cheng et al. [61] proposed a subject-aware contrastive learning method for biosignals whose core elements were a subject-specific contrastive loss and adversarial training to promote subject invariance during pretraining. Ultimately, Zhang et al. [62] developed a contrastive pre-training for time series via Time-Frequency consistency.…”
Section: A. Pretext Tasks (mentioning)
confidence: 99%