2011
DOI: 10.1007/978-3-642-20847-8_13
INSIGHT: Efficient and Effective Instance Selection for Time-Series Classification

Abstract: Time-series classification is a widely examined data mining task with various scientific and industrial applications. Recent research in this domain has shown that the simple nearest-neighbor classifier using Dynamic Time Warping (DTW) as its distance measure performs exceptionally well, in most cases outperforming more advanced classification algorithms. Instance selection is a commonly applied approach for improving the efficiency of the nearest-neighbor classifier with respect to classification time. This app…
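The abstract's baseline — a 1-NN classifier with DTW as the distance measure — can be sketched as follows. This is a minimal illustrative implementation, not the paper's code; the function names `dtw` and `nn_classify` are my own.

```python
# Hedged sketch: classic O(n*m) dynamic-programming DTW distance,
# plus a brute-force 1-NN classifier that uses it.
# Names and structure are illustrative, not taken from the paper.

def dtw(a, b):
    """Dynamic Time Warping distance between two 1-D sequences."""
    n, m = len(a), len(b)
    INF = float("inf")
    # cost[i][j] = minimal cumulative cost of aligning a[:i] with b[:j]
    cost = [[INF] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])
            cost[i][j] = d + min(cost[i - 1][j],      # step in a only
                                 cost[i][j - 1],      # step in b only
                                 cost[i - 1][j - 1])  # step in both
    return cost[n][m]

def nn_classify(query, train):
    """1-NN: return the label of the training series closest under DTW."""
    return min(train, key=lambda pair: dtw(query, pair[0]))[1]
```

Instance selection, as studied in the paper, would shrink the `train` list handed to `nn_classify`, reducing the number of DTW computations per query.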

Cited by 46 publications (35 citation statements)
References 18 publications
“…In this paper we exploit hubness for instance selection for time series classification algorithms. In our previous work [35] we proved that instance selection is an NP-complete problem and discussed coverage of the selected instances. Here, in contrast, we focus on hub-based instance selection for electrocardiography.…”
Section: Related Work
confidence: 99%
“…limiting the warping window size), indexing and reducing the length of the time series used. For more details we refer to [32] and the references therein. On the other hand, we note that distances between different pairs of training instances can be calculated independently, therefore, computations can be parallelized and implemented on a distributed supercomputer (cloud).…”
Section: On the Computational Aspects of the Implementation of Hubness
confidence: 99%
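The citation above names limiting the warping window size as a standard way to speed up DTW. A hedged sketch of that constraint (commonly known as the Sakoe-Chiba band, restricting the alignment to `|i - j| <= w`) might look like this; the function name and parameter `w` are my own, not from the cited works:

```python
# Hedged sketch: DTW restricted to a warping window of radius w
# (Sakoe-Chiba band). Cells with |i - j| > w are never filled,
# cutting the cost from O(n*m) toward O(n*w).

def dtw_band(a, b, w):
    """DTW distance with alignment constrained to |i - j| <= w."""
    n, m = len(a), len(b)
    INF = float("inf")
    cost = [[INF] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(1, n + 1):
        lo, hi = max(1, i - w), min(m, i + w)  # only cells inside the band
        for j in range(lo, hi + 1):
            d = abs(a[i - 1] - b[j - 1])
            cost[i][j] = d + min(cost[i - 1][j],
                                 cost[i][j - 1],
                                 cost[i - 1][j - 1])
    return cost[n][m]
```

Since each pairwise distance is computed independently, the full training-set distance matrix parallelizes trivially, e.g. by sharding the pair list across processes, which matches the citation's remark about distributed computation.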
“…We refer to this phenomenon as the presence of hubs or hubness for short, and the classifiers that take this phenomenon into account are called hubness-aware classifiers. Hubness-aware classifiers were originally proposed for vector data and image data [29], [30], [31], and only few works considered hubness-aware classification of time series [32], [33], [34], but none of them considered hubnessaware classifiers for EEG data.…”
Section: Introduction
confidence: 99%
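The hubness phenomenon described above is usually quantified via k-occurrence counts: how often each instance appears among the k nearest neighbors of the other instances, with frequent occurrers being the hubs. A minimal sketch, assuming Euclidean distance over fixed-length vectors for brevity (the cited works use DTW on time series); the name `k_occurrences` is my own:

```python
# Hedged sketch: compute k-occurrence counts over a small dataset.
# counts[j] = number of instances that have instance j among their
# k nearest neighbors; high counts indicate hubs.

def k_occurrences(data, k):
    counts = [0] * len(data)
    for i, x in enumerate(data):
        # rank all other instances by squared Euclidean distance to x
        others = sorted(
            (j for j in range(len(data)) if j != i),
            key=lambda j: sum((a - b) ** 2 for a, b in zip(x, data[j])),
        )
        for j in others[:k]:
            counts[j] += 1
    return counts
```

Hubness-aware classifiers and hub-based instance selection then act on these counts, e.g. by down-weighting "bad hubs" whose label disagrees with the instances they are neighbors of.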
“…[5], [12], [13], [15], [22], [25], [26], [20] and [21] for a survey. These algorithms try to recognize bad hubs and reduce their influence on classifications of unlabeled instances.…”
Section: Introduction
confidence: 99%