Proceedings of the 2012 SIAM International Conference on Data Mining
DOI: 10.1137/1.9781611972825.27
Transformation Based Ensembles for Time Series Classification

Abstract: Until recently, the vast majority of data mining time series classification (TSC) research has focused on alternative distance measures for 1-Nearest Neighbour (1-NN) classifiers based on either the raw data, or on compressions or smoothings of the raw data. Despite the extensive evidence in favour of 1-NN classifiers with Euclidean or Dynamic Time Warping distance, there has also been a flurry of recent research publications proposing classification algorithms for TSC. Generally, these classifiers describe dif…


Cited by 84 publications (82 citation statements)
References 21 publications
“…Table 5 shows the accuracy of three classifiers trained with 9 of the 26 datasets. The classifiers are: an SVM built on selected shapelets (results comparable with Table 4), Dynamic Time Warping with 1-NN on the raw data, and an ensemble of 1-NN classifiers built on transformations into the power spectrum, autocorrelation function and principal component space (described in [1]). These data sets were selected as they are common to both papers.…”
Section: Other Classifiers (mentioning)
confidence: 99%
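The transform ensemble this statement describes can be sketched in a few lines: build one 1-NN classifier per representation (here the power spectrum and the autocorrelation function) and combine their predictions by majority vote. This is a minimal illustration under stated assumptions, not the paper's implementation; all function names and the choice of Euclidean 1-NN per transform are illustrative.

```python
import numpy as np

def power_spectrum(x):
    # Power spectrum of a series: squared magnitude of the real FFT.
    return np.abs(np.fft.rfft(x)) ** 2

def autocorrelation(x, max_lag=10):
    # Autocorrelation function up to max_lag lags (series is mean-centred).
    x = x - x.mean()
    denom = np.dot(x, x)
    return np.array([np.dot(x[:-k], x[k:]) / denom
                     for k in range(1, max_lag + 1)])

def one_nn_predict(train_X, train_y, query):
    # Plain 1-NN with Euclidean distance in the transformed space.
    dists = np.linalg.norm(train_X - query, axis=1)
    return train_y[np.argmin(dists)]

def ensemble_predict(train_series, train_y, query, transforms):
    # One 1-NN vote per transform, combined by simple majority.
    votes = []
    for t in transforms:
        train_X = np.array([t(s) for s in train_series])
        votes.append(one_nn_predict(train_X, train_y, t(query)))
    vals, counts = np.unique(votes, return_counts=True)
    return vals[np.argmax(counts)]
```

Both of these transforms are phase-invariant, which is the point of moving out of the time domain: two series with the same periodic structure but different offsets land close together in the transformed space even when they are far apart in the raw space.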
“…In [1] we show that transforming a TSC problem into an alternative data space prior to classification can provide a greater level of improvement than developing classifier refinements. Hence, we propose a shapelet transform that creates a new classification data set independently of the classifier.…”
Section: Introduction (mentioning)
confidence: 98%
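The shapelet transform mentioned in this statement turns each series into a fixed-length feature vector of distances to a set of shapelets, after which any standard classifier can be trained on the result. A minimal sketch under stated assumptions follows; `subsequence_dist` and `shapelet_transform` are hypothetical names, and shapelet selection itself is omitted.

```python
import numpy as np

def subsequence_dist(series, shapelet):
    # Minimum Euclidean distance between the shapelet and any
    # equal-length subsequence of the series.
    L = len(shapelet)
    return min(np.linalg.norm(series[i:i + L] - shapelet)
               for i in range(len(series) - L + 1))

def shapelet_transform(dataset, shapelets):
    # Each series becomes a vector of its distances to the shapelets,
    # giving a new classification data set independent of any classifier.
    return np.array([[subsequence_dist(s, sh) for sh in shapelets]
                     for s in dataset])
```

The resulting matrix has one row per series and one column per shapelet, so the downstream classifier never sees the raw time axis at all.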
“…A recent study notes "transforming the data is the simplest way of achieving improvement in problems where the discriminating features are based on similarity in change and similarity in shape" [5]. Following this principle, we extract all time series subsequences of different lengths from each dimension of the multivariate time series.…”
Section: A Methods Sketch (mentioning)
confidence: 99%
“…Many studies have relied on reducing the size of datasets [25], [26], [27], [28] when applying KNN, since it must compare against every training example for each test instance, which increases execution time on large datasets [29], [30], [31]. Bagnall et al. [32] determined that 1-NN with an elastic measure such as DTW is the best approach for smaller data sets, but that as the number of series increases "the accuracy of elastic measures converges with that of Euclidean distance". Euclidean distance is computed from the squared differences between points, so the sign of each difference is lost; we therefore developed DTWDir, which considers both the absolute distance value and its sign.…”
(mentioning)
confidence: 99%
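The elastic-versus-Euclidean contrast in this statement is easy to see with the classic dynamic-programming formulation of DTW: warping lets a time-shifted pattern align perfectly, where Euclidean distance compares points index-by-index and penalises the shift. A minimal sketch, not any cited paper's implementation:

```python
import numpy as np

def dtw_distance(a, b):
    # Classic O(n*m) dynamic-programming DTW with squared point cost.
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = (a[i - 1] - b[j - 1]) ** 2
            # Extend the cheapest of the three admissible warping moves.
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return np.sqrt(D[n, m])
```

For two copies of the same bump shifted by one sample, `dtw_distance` is zero while the Euclidean distance is not, which is exactly the elasticity that helps 1-NN on small, misaligned data sets.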