2022
DOI: 10.1016/j.engappai.2022.105331

Semi-supervised Time Series Classification Model with Self-supervised Learning

Cited by 23 publications (6 citation statements)
References 20 publications

“…To verify the efficacy of DiffShape in mitigating the issue of lacking labeled samples, we perform classification analyses on time series datasets with only a few labels per class without using unlabeled data. Specifically, we select Supervised, LTS (Grabocka et al. 2014), ADSN (Ma et al. 2020), SemiTime (Fan et al. 2021), SSSTC (Xi et al. 2022), MTFC (Wei et al. 2023) and TS-TFC (Liu et al. 2023b) as baselines. Similar to the previous section, we employ the 12 UCR time series datasets with only 2, 5, and 10 labeled samples per class for analyses.…”
Section: Results On a Few Labeled Time Series (mentioning)
confidence: 99%
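The few-label protocol quoted above (2, 5, or 10 labeled samples per class, with the remainder treated as unlabeled) is straightforward to reproduce. Below is a minimal sketch assuming a NumPy label vector; the function name and seed handling are illustrative, not taken from any of the cited papers.

```python
# Minimal sketch of the few-label protocol: keep only k labeled examples
# per class and treat the rest as unlabeled. Illustrative code, not the
# cited papers' implementation.
import numpy as np

def subsample_labels_per_class(y, k, seed=0):
    """Return (labeled indices, unlabeled indices) with k labels per class."""
    rng = np.random.default_rng(seed)
    labeled_idx = []
    for cls in np.unique(y):
        cls_idx = np.flatnonzero(y == cls)
        labeled_idx.extend(rng.choice(cls_idx, size=min(k, len(cls_idx)),
                                      replace=False))
    labeled_idx = np.sort(np.array(labeled_idx))
    unlabeled_idx = np.setdiff1d(np.arange(len(y)), labeled_idx)
    return labeled_idx, unlabeled_idx

# Example: three classes of 50 samples each, keeping 5 labels per class.
y = np.repeat([0, 1, 2], 50)
labeled, unlabeled = subsample_labels_per_class(y, k=5)
print(len(labeled), len(unlabeled))  # 15 135
```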
“…Baselines. DiffShape is compared with 10 SSC methods, including Supervised, Pseudo-Label (Lee et al. 2013), Temporal Ensembling (TE) (Laine and Aila 2016), LPDeepSSL (Iscen et al. 2019), MTL (Jawed, Grabocka, and Schmidt-Thieme 2020), TS-TCC (Eldele et al. 2021), SemiTime (Fan et al. 2021), SSSTC (Xi et al. 2022), MTFC (Wei et al. 2023), TS-TFC (Liu et al. 2023b). Supervised methods only use labeled data for classification training via cross-entropy.…”
Section: Methods (mentioning)
confidence: 99%
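Of the baselines listed above, Pseudo-Label (Lee et al. 2013) is the simplest to illustrate: the model's own confident predictions on unlabeled data are reused as hard targets. Below is a minimal PyTorch sketch; the confidence threshold and loss weighting are illustrative assumptions rather than the original paper's schedule.

```python
# Minimal sketch of the Pseudo-Label baseline: cross-entropy on the few
# true labels, plus cross-entropy against the model's own confident
# argmax predictions on unlabeled data. Threshold and weight are
# illustrative assumptions.
import torch
import torch.nn.functional as F

def pseudo_label_loss(model, x_labeled, y_labeled, x_unlabeled,
                      threshold=0.95, unlabeled_weight=1.0):
    # Supervised term on the few labeled examples.
    sup_loss = F.cross_entropy(model(x_labeled), y_labeled)

    # Pseudo-label term: hard targets from the model's own predictions,
    # kept only where the predicted probability is high enough.
    with torch.no_grad():
        probs = F.softmax(model(x_unlabeled), dim=1)
        conf, pseudo_y = probs.max(dim=1)
        mask = conf >= threshold
    if mask.any():
        unsup_loss = F.cross_entropy(model(x_unlabeled[mask]), pseudo_y[mask])
    else:
        unsup_loss = x_unlabeled.new_zeros(())
    return sup_loss + unlabeled_weight * unsup_loss
```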
“…Jawed et al. provide a powerful alternative to supervised signals for feature learning by utilizing unlabeled training data through a prediction task: in a multi-task learning approach, prediction is optimized as a secondary task alongside the primary classification task, yielding a model with better performance [14]. Xi et al. proposed that the past-anchor-future strategy can extract higher-quality semantic context from unlabeled time series data, and that self-supervised temporal relation learning can effectively assist supervised models [38]. Rezaei et al. pre-trained the model on a large unlabeled dataset by inputting the time series features of sampled packets; the learned weights were then transferred to a small labeled dataset, achieving the same accuracy as a fully supervised method trained on a large labeled dataset [31].…”
Section: Related Work (mentioning)
confidence: 99%
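The past-anchor-future strategy attributed to Xi et al. [38] above can be illustrated with a small sampling routine: cut an unlabeled series into past, anchor, and future segments, and train a model to predict a segment's temporal relation to the anchor. The fixed segment length and the binary before/after relation below are simplifying assumptions, not the paper's exact formulation.

```python
# Minimal sketch of a past-anchor-future style pretext task: sample an
# anchor segment, then either its past or future neighbor, and label the
# pair by temporal order. Simplified, not the exact SSSTC formulation.
import numpy as np

def sample_temporal_relation(series, seg_len, rng):
    """Return (anchor, segment, label): 1 if segment is the future, 0 if past."""
    T = len(series)
    assert T >= 3 * seg_len, "series too short for three segments"
    start = rng.integers(seg_len, T - 2 * seg_len + 1)  # anchor start
    anchor = series[start:start + seg_len]
    past = series[start - seg_len:start]
    future = series[start + seg_len:start + 2 * seg_len]
    if rng.random() < 0.5:
        return anchor, future, 1
    return anchor, past, 0

# Example: build a small self-supervised batch from one unlabeled series.
rng = np.random.default_rng(0)
series = np.sin(np.linspace(0, 20, 300))
batch = [sample_temporal_relation(series, seg_len=32, rng=rng) for _ in range(8)]
print([label for _, _, label in batch])
```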
“…Moreover, the previously described methods were used as a reference to develop novel approaches for time series that were not specifically designed for medical applications but were tested on medical datasets. For example, Gorade et al. [59] proposed a BYOL-based non-contrastive large-scale time series representation learning approach via simultaneous bootstrapping of low- and high-frequency input features; Xi et al. [60] introduced a self-supervised temporal relation mining module in their work for semi-supervised time series classification; Cheng et al. [61] proposed a subject-aware contrastive learning method for biosignals whose core elements were a subject-specific contrastive loss and adversarial training to promote subject-invariance during pretraining. Finally, Zhang et al. [62] developed contrastive pre-training for time series via time-frequency consistency.…”
Section: A. Pretext Tasks (mentioning)
confidence: 99%
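The time-frequency consistency idea attributed to Zhang et al. [62] can be sketched as two encoders, one over the raw series and one over its FFT magnitude, aligned by an InfoNCE-style contrastive loss in which the two views of the same sample are positives. The tiny MLP encoders and temperature below are illustrative assumptions, not the paper's architecture.

```python
# Minimal sketch of time-frequency consistency pretraining: embed the raw
# series and its FFT magnitude with separate encoders, then align the two
# views of each sample with an InfoNCE-style loss. Illustrative only.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TFCEncoder(nn.Module):
    def __init__(self, in_len, emb_dim=64):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(in_len, 128), nn.ReLU(),
                                 nn.Linear(128, emb_dim))

    def forward(self, x):
        # L2-normalized embeddings so dot products are cosine similarities.
        return F.normalize(self.net(x), dim=1)

def tfc_loss(time_emb, freq_emb, temperature=0.2):
    """InfoNCE: the frequency view of sample i is the positive for its time view."""
    logits = time_emb @ freq_emb.t() / temperature              # (B, B) similarities
    targets = torch.arange(time_emb.size(0),
                           device=time_emb.device)              # diagonal = positives
    return F.cross_entropy(logits, targets)

# Example: one pretraining step on a random batch of length-128 series.
x = torch.randn(16, 128)
x_freq = torch.fft.rfft(x).abs()                                # frequency view
time_enc, freq_enc = TFCEncoder(128), TFCEncoder(x_freq.size(1))
loss = tfc_loss(time_enc(x), freq_enc(x_freq))
loss.backward()
```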