2018 IEEE International Conference on Data Mining (ICDM)
DOI: 10.1109/icdm.2018.00036
Explainable Time Series Tweaking via Irreversible and Reversible Temporal Transformations

Abstract: Time series classification has received great attention over the past decade, with a wide range of methods focusing on predictive performance by exploiting various types of temporal features. Nonetheless, little emphasis has been placed on interpretability and explainability. In this paper, we formulate the novel problem of explainable time series tweaking, where, given a time series and an opaque classifier that provides a particular classification decision for the time series, we want to find the minimum numb…
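The tweaking problem stated in the abstract — given a series and an opaque classifier, find a small change that flips the decision — can be illustrated with a minimal sketch. The segment-swap heuristic, the `predict`/`candidates` interface, and the Euclidean cost are assumptions for illustration, not the paper's actual algorithm:

```python
import numpy as np

def tweak_toward_target(x, predict, candidates, target_label):
    """Counterfactual tweaking sketch: replace one aligned segment of x
    at a time with the corresponding segment from a candidate series of
    the target class, and keep the cheapest change (smallest Euclidean
    distance to x) that flips the classifier's decision.

    Hypothetical interface: `predict` maps a 1-D series to a label;
    `candidates` are training series already labeled `target_label`.
    """
    best, best_dist = None, np.inf
    seg = max(1, len(x) // 8)  # fixed segment length (an assumption)
    for c in candidates:
        for start in range(0, len(x) - seg + 1, seg):
            x_new = x.copy()
            x_new[start:start + seg] = c[start:start + seg]
            if predict(x_new) == target_label:
                d = np.linalg.norm(x_new - x)  # cost of the tweak
                if d < best_dist:
                    best, best_dist = x_new, d
    return best, best_dist
```

The returned series is a candidate counterfactual: it receives the target label while staying as close as possible to the original under this (assumed) segment-replacement search.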

Cited by 24 publications (15 citation statements) · References 35 publications (46 reference statements)
“…Wachter et al are among the first to use the term "counterfactual" for ML explanations [25]. Counterfactual explanations have also been used in the domains of image [26], document [27] and univariate time series [28] classification.…”
Section: Related Work
confidence: 99%
“…We define the problem of global explainable time series tweaking for the k-nearest neighbor classifier and present a simple solution to tackle it. Finally, we show that our algorithm for finding a transformation T for the k-nearest neighbor classifier is a generalization of the 1-nearest neighbor approach presented by Karlsson et al [21].…”
Section: Global Tweaking: k-Nearest Neighbor
confidence: 76%
“…Note that for k = 1, the algorithm proposed here is equivalent to the baseline algorithm for the 1-nearest neighbor proposed by Karlsson et al [21]. The explanation is simply that for the 1-nearest neighbor, where by definition k = 1, we have that C = |X|.…”
Section: Global Tweaking: k-Nearest Neighbor
confidence: 96%
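The quoted statements describe a k-nearest-neighbor generalization that, for k = 1, collapses to the 1-NN baseline of Karlsson et al. A minimal sketch of that idea — pull the series toward the target class until the k-NN vote flips. The interpolation schedule, the centroid target for k > 1, and all names are assumptions, not the cited algorithm:

```python
import numpy as np

def knn_predict(x, X, y, k):
    """Majority vote among the k nearest training series (Euclidean)."""
    dists = np.linalg.norm(X - x, axis=1)
    nearest = y[np.argsort(dists)[:k]]
    vals, counts = np.unique(nearest, return_counts=True)
    return vals[np.argmax(counts)]

def tweak_knn(x, X, y, k, target_label, steps=20):
    """Interpolate x toward a target-class representative until the
    k-NN vote flips. For k = 1 this degenerates to pulling x toward
    its nearest target-class neighbor, mirroring how the k-NN tweaking
    algorithm generalizes the 1-NN baseline."""
    targets = X[y == target_label]
    if k == 1:
        # nearest target-class series, as in the 1-NN special case
        t = targets[np.argmin(np.linalg.norm(targets - x, axis=1))]
    else:
        t = targets.mean(axis=0)  # class centroid (an assumption)
    for i in range(1, steps + 1):
        x_new = x + (i / steps) * (t - x)
        if knn_predict(x_new, X, y, k) == target_label:
            return x_new
    return None
```

The loop stops at the first (hence smallest, under this schedule) interpolation step whose result the k-NN classifier assigns to the target class.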