2018 IEEE International Conference on Big Data (Big Data)
DOI: 10.1109/bigdata.2018.8621990

Transfer learning for time series classification

Abstract: Transfer learning for deep neural networks is the process of first training a base network on a source dataset, and then transferring the learned features (the network's weights) to a second network to be trained on a target dataset. This idea has been shown to improve a deep neural network's generalization capabilities in many computer vision tasks such as image recognition and object localization. Apart from these applications, deep Convolutional Neural Networks (CNNs) have also recently gained popularity in the Time Series Classification (TSC) community. […]
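The recipe in the abstract (pre-train a base network on a source dataset, copy the learned weights into a second network, then fine-tune on the target dataset) is easy to see in code. Below is a minimal sketch in Keras; the small 1D-convolutional architecture, layer sizes, and dataset shapes are illustrative assumptions, not the paper's exact configuration.

```python
# Minimal transfer-learning sketch for time series classification (Keras).
# Architecture and shapes are illustrative assumptions, not the paper's setup.
from tensorflow import keras
from tensorflow.keras import layers

def build_model(series_length, n_classes):
    """A small 1D fully convolutional classifier."""
    inputs = keras.Input(shape=(series_length, 1))
    x = layers.Conv1D(64, 8, padding="same", activation="relu")(inputs)
    x = layers.Conv1D(64, 5, padding="same", activation="relu")(x)
    x = layers.GlobalAveragePooling1D()(x)            # pool over the time axis
    outputs = layers.Dense(n_classes, activation="softmax")(x)
    return keras.Model(inputs, outputs)

# 1) Pre-train a base network on the (larger) source dataset.
source_model = build_model(series_length=128, n_classes=10)
source_model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
# source_model.fit(x_source, y_source, epochs=...)    # source data assumed available

# 2) Transfer: copy every layer's weights except the final softmax,
#    whose size must match the target dataset's class count.
target_model = build_model(series_length=128, n_classes=3)
for src, tgt in zip(source_model.layers[:-1], target_model.layers[:-1]):
    tgt.set_weights(src.get_weights())

# 3) Fine-tune all layers on the (smaller) target dataset.
target_model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
# target_model.fit(x_target, y_target, epochs=...)    # target data assumed available
```

Copying everything except the final softmax lets the target network reuse the learned feature extractors even when the number of classes differs between source and target datasets.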

Cited by 152 publications (55 citation statements). References 26 publications.
“…[13] It is widely used in machine translation [14-16], anomaly detection [17-19], and time series classification [20-22]. It is also well suited to time series prediction, since it captures backward and forward correlations through its memory characteristics [23,24]. However, such algorithms can only process historical data over short periods when making predictions, and have difficulty capturing long-term dependencies.…”
Section: Introduction (mentioning), confidence: 99%
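The "backward and forward correlations based on memory characteristics" mentioned above describe bidirectional recurrent models such as a bidirectional LSTM. As a generic illustration (a sketch, not the citing paper's actual model, with assumed hyperparameters), such a classifier might be defined as:

```python
# Illustrative bidirectional LSTM classifier (Keras).
# Shapes and sizes are assumptions, not taken from the citing paper.
from tensorflow import keras
from tensorflow.keras import layers

series_length, n_classes = 128, 5             # assumed problem dimensions

inputs = keras.Input(shape=(series_length, 1))
# The Bidirectional wrapper runs one LSTM forward and one backward in time,
# which is the forward/backward correlation processing described above.
x = layers.Bidirectional(layers.LSTM(64))(inputs)
outputs = layers.Dense(n_classes, activation="softmax")(x)

model = keras.Model(inputs, outputs)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
```

Even with this memory mechanism, plain recurrent models struggle with very long-range dependencies, which is the limitation the citing authors note.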
“…However, it is well known that DNNs are prone to overfitting, especially when a large labeled training dataset is not available [10,18].…”
Section: Introduction (mentioning), confidence: 99%
“…Therefore, we present our results on the UCR 2018 archive separated into two tables. The datasets already contained in the 2015 archive are compared with results from relevant publications, and the new datasets are compared to the Dynamic Time Warping (DTW) results published on the archive's website [10,11].…”
Section: Methods (mentioning), confidence: 99%
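The DTW baseline referenced here is the classic dynamic-programming alignment distance between two series. Below is a minimal, unconstrained sketch in plain NumPy; the archive's published baselines additionally use a tuned warping window and a 1-nearest-neighbor classifier, which this sketch omits.

```python
# Minimal dynamic time warping distance (no warping-window constraint).
# Purely illustrative; not the archive's exact baseline implementation.
import numpy as np

def dtw_distance(a: np.ndarray, b: np.ndarray) -> float:
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = (a[i - 1] - b[j - 1]) ** 2
            # extend the cheapest of the three admissible alignments
            cost[i, j] = d + min(cost[i - 1, j],       # insertion
                                 cost[i, j - 1],       # deletion
                                 cost[i - 1, j - 1])   # match
    return float(np.sqrt(cost[n, m]))

# Sanity check: identical series have zero distance.
x = np.sin(np.linspace(0.0, 6.28, 50))
print(dtw_distance(x, x))   # -> 0.0
```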