2019
DOI: 10.1007/s10115-019-01389-4

Locally and globally explainable time series tweaking

Abstract: Time series classification has received great attention over the past decade with a wide range of methods focusing on predictive performance by exploiting various types of temporal features. Nonetheless, little emphasis has been placed on interpretability and explainability. In this paper, we formulate the novel problem of explainable time series tweaking, where, given a time series and an opaque classifier that provides a particular classification decision for the time series, we want to find the changes to b…
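As an informal illustration of the tweaking problem described in the abstract, the sketch below trains an off-the-shelf classifier on synthetic series and greedily nudges single time points of one series until its predicted class flips, reporting the cost of the change. The data, the random-forest model, and the greedy tweak() search are illustrative assumptions only; the paper itself operates on random shapelet forest and k-NN classifiers with different tweaking algorithms.

```python
# A minimal, hypothetical sketch of the "time series tweaking" problem setup:
# given an opaque classifier and a series x, find a modified series x' whose
# predicted label differs from that of x, while keeping the change small.
# This naive greedy search is only an illustration, NOT the paper's algorithm.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Synthetic data: class 0 = flat noise, class 1 = noise with a bump.
n, length = 200, 50
X = rng.normal(0.0, 0.3, size=(n, length))
y = np.zeros(n, dtype=int)
X[n // 2:, 20:30] += 1.5
y[n // 2:] = 1

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

def tweak(x, target, step=0.2, max_iter=200):
    """Greedily nudge single time points toward the target-class mean
    until the classifier's decision flips (or we give up)."""
    x = x.copy()
    target_mean = X[y == target].mean(axis=0)
    for _ in range(max_iter):
        if clf.predict(x.reshape(1, -1))[0] == target:
            return x
        # Move the point that is currently furthest from the target-class mean.
        i = np.argmax(np.abs(target_mean - x))
        x[i] += step * np.sign(target_mean[i] - x[i])
    return None

x = X[0]                      # a class-0 series
x_tweaked = tweak(x, target=1)
if x_tweaked is not None:
    print("cost (L2 distance):", np.linalg.norm(x_tweaked - x))
```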

Cited by 22 publications (12 citation statements) · References: 38 publications
“…Model-agnostic explanation models are typically based on decision trees, rules or feature importance (Guidotti et al., 2018; Freitas, 2014; Craven & Shavlik, 1995), because of the simplicity of such explanations. Several model-specific and data-specific explanation models have also been developed, e.g., for deep neural networks (Binder et al., 2016; Selvaraju et al., 2019), deep relational machines (Srinivasan, Vig & Bain, 2019), time series (Karlsson et al., 2019), multi-labelled and ontology-linked data (Panigutti, Perotti & Pedreschi, 2020) or logic problems (Biecek, 2018); software toolkits including the implementation of various XAI algorithms have also been introduced (Arya et al., 2019). A comprehensive survey of explainability methods can be found in Guidotti et al. (2018) and in Došilović, Brčić & Hlupić (2018).…”
Section: Introduction
confidence: 99%
“…The created simple model regularly seeks to optimize its similarity to the original function while minimizing the complexity and achieving comparable performance to the original model. There are two methods that can be considered subsets of the simplification approach, which include the local explanation (Neto and Paulovich 2020; Karlsson et al. 2020) and the example generation (Chen et al. 2021). The local explanation assumes that simplification can be achieved by segmenting the solution space into smaller segments and performing an analysis on the particular segments of a model.…”
Section: Simplification
confidence: 99%
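The "simplification" excerpt above describes approximating an opaque model with a simpler one of comparable behaviour. A minimal sketch of that idea could look like the following; the opaque random forest, the shallow surrogate tree, and the fidelity measure are all illustrative assumptions, not any cited paper's procedure. A local explanation would fit the same surrogate, but only on samples drawn from a neighbourhood of the instance being explained.

```python
# Hedged sketch of model simplification via a global surrogate: fit a small,
# interpretable decision tree to mimic the opaque model's predictions.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(1)
X = rng.normal(size=(500, 5))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)

opaque = RandomForestClassifier(n_estimators=200, random_state=1).fit(X, y)

# Global surrogate: train on the opaque model's outputs, not the true labels.
surrogate = DecisionTreeClassifier(max_depth=2, random_state=1)
surrogate.fit(X, opaque.predict(X))

# Fidelity: how often the surrogate agrees with the opaque model.
fidelity = (surrogate.predict(X) == opaque.predict(X)).mean()
print(f"fidelity to opaque model: {fidelity:.2f}")
print(export_text(surrogate, feature_names=[f"f{i}" for i in range(5)]))
```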
“…Moreover, instance- or example-based explanations can be more easily interpreted by a non-expert person [31]. These methods explain a prediction on a single instance by comparing it to another real or generated example, e.g., the most typical exemplar of the observed phenomenon (a prototype [16]) or a contrastive exemplar related to a distinct behaviour (a counterfactual [2, 14, 18]). For time series classifiers, counterfactuals can be generated by swapping the values of the most discriminative dimensions with those from another training instance [2].…”
Section: Related Work
confidence: 99%
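The last sentence of the excerpt above, generating counterfactuals by swapping the most discriminative dimensions with those of another training instance, can be sketched roughly as follows. The synthetic multivariate data, the random-forest importance ranking, and the single donor instance are assumptions for illustration, not the cited method's exact procedure.

```python
# Sketch of dimension-swapping counterfactuals for multivariate time series:
# replace the most discriminative dimension(s) of a query with the same
# dimension(s) from a training instance of the target class until the
# prediction flips.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(2)
n, dims, length = 200, 3, 30
X = rng.normal(size=(n, dims, length))
y = np.zeros(n, dtype=int)
X[n // 2:, 1, :] += 1.0          # dimension 1 carries the class signal
y[n // 2:] = 1

X_flat = X.reshape(n, dims * length)
clf = RandomForestClassifier(n_estimators=100, random_state=2).fit(X_flat, y)

# Rank dimensions by the summed feature importance of their time points.
imp = clf.feature_importances_.reshape(dims, length).sum(axis=1)
order = np.argsort(imp)[::-1]

x = X[0].copy()                   # a class-0 series; target class is 1
donor = X[y == 1][0]              # any training instance of the target class
for k, d in enumerate(order, start=1):
    x[d] = donor[d]               # swap one dimension at a time
    if clf.predict(x.reshape(1, -1))[0] == 1:
        print(f"flipped after swapping {k} dimension(s)")
        break
```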
“…The Native Guide algorithm [14] does not suffer from the previous issue but uses a perturbation mechanism on the Nearest Unlike Neighbor in the training set, using the model's internal feature vector. Lastly, for k-NN and Random Shapelet Forest classifiers, [18] design a tweaking mechanism to produce counterfactual time series.…”
Section: Related Work
confidence: 99%
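A simplified sketch of the nearest-unlike-neighbour idea behind Native Guide [14] is given below: find the closest training series with a different label and move the query toward it until the decision flips. The plain linear interpolation stands in for the method's guided perturbation of discriminative regions, so the data and search below are an assumed simplification rather than the published algorithm.

```python
# Nearest-unlike-neighbour (NUN) counterfactual sketch: locate the closest
# training series with a different label, then interpolate toward it and stop
# as soon as the classifier changes its decision.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(3)
n, length = 200, 40
X = rng.normal(size=(n, length))
y = np.zeros(n, dtype=int)
X[n // 2:, 10:20] += 1.2
y[n // 2:] = 1

clf = RandomForestClassifier(n_estimators=100, random_state=3).fit(X, y)

x = X[0]                                         # query series
pred = clf.predict(x.reshape(1, -1))[0]

# Nearest unlike neighbour: closest training series with a different label.
unlike = X[y != pred]
nun = unlike[np.argmin(np.linalg.norm(unlike - x, axis=1))]

# Interpolate toward the NUN until the decision flips.
for alpha in np.linspace(0.0, 1.0, 21):
    candidate = (1 - alpha) * x + alpha * nun
    if clf.predict(candidate.reshape(1, -1))[0] != pred:
        print(f"counterfactual found at alpha = {alpha:.2f}")
        break
```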