2021
DOI: 10.3390/s21217373
TSInsight: A Local-Global Attribution Framework for Interpretability in Time Series Data

Abstract: With the rise in the employment of deep learning methods in safety-critical scenarios, interpretability is more essential than ever before. Although many different directions regarding interpretability have been explored for visual modalities, time series data has been neglected, with only a handful of methods tested due to their poor intelligibility. We approach the problem of interpretability in a novel way by proposing TSInsight, where we attach an auto-encoder to the classifier with a sparsity-inducing norm […]
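The abstract's core idea, an auto-encoder attached in front of a classifier and fine-tuned under a sparsity-inducing norm, can be sketched as follows. This is a minimal illustration under stated assumptions, not the authors' reference implementation: the network shapes, the use of an L1 penalty as the sparsity-inducing norm, and the loss weights `beta` and `gamma` are all hypothetical.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AutoEncoder(nn.Module):
    """A small convolutional auto-encoder for (batch, channels, time) input."""
    def __init__(self, channels: int = 1, hidden: int = 16):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv1d(channels, hidden, kernel_size=5, padding=2),
            nn.ReLU(),
        )
        self.decoder = nn.Conv1d(hidden, channels, kernel_size=5, padding=2)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.decoder(self.encoder(x))

def tsinsight_style_loss(ae: nn.Module, classifier: nn.Module,
                         x: torch.Tensor, y: torch.Tensor,
                         beta: float = 1.0, gamma: float = 1e-3) -> torch.Tensor:
    """Classification loss on the reconstructed input, plus a reconstruction
    penalty and an L1 sparsity penalty on the auto-encoder output.
    The weights beta and gamma are illustrative assumptions."""
    x_hat = ae(x)                                      # reconstruct the series
    cls_loss = F.cross_entropy(classifier(x_hat), y)   # keep it class-discriminative
    recon_loss = F.mse_loss(x_hat, x)                  # stay close to the input
    sparsity = x_hat.abs().mean()                      # sparsity-inducing L1 norm
    return cls_loss + beta * recon_loss + gamma * sparsity

# Usage sketch: freeze the pretrained classifier and fine-tune only the
# auto-encoder; its sparse reconstruction then highlights salient regions.
# for p in classifier.parameters():
#     p.requires_grad_(False)
# loss = tsinsight_style_loss(ae, classifier, batch_x, batch_y)
# loss.backward(); optimizer.step()
```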

Cited by 7 publications (3 citation statements)
References 22 publications
“…The time component impedes the usage of existing methods (Ismail et al (2020)). Thus, increasing effort is put into adapting existing methods to time series (e.g., LEFTIST based on SHAP / Lime (Guilleme et al (2019)), Temporal Saliency Rescaling for Saliency Methods (Ismail et al (2020)), Counterfactuals (Ates et al (2021); Delaney et al (2021))), and developing new methods specifically for time series interpretability (e.g., TSInsight based on autoencoders (Siddiqui et al (2021)), TSViz for interpreting CNN (Siddiqui et al (2019))). For a survey of time series interpretability, please refer to Rojat et al (2021).…”
Section: Need for Interpretability in Time Series Classification (citation type: mentioning, confidence: 99%)
“…Various interpretability methods for time series classification are available (Rojat et al (2021)). However, the usage of those methods is not yet standardized: The proposed methods often lack a) open code (e.g., Siddiqui et al (2021)), b) an easy-to-use interface (e.g., Ismail et al (2020)), or c) visualization (e.g., Guilleme et al (2019)), making the application of those methods inconvenient and thereby hindering the usage of deep learning methods on safety-critical scenarios.…”
Section: Introduction (citation type: mentioning, confidence: 99%)
“…Siddiqui et al [30] propose with TSinsight an attribution technique but also use multiple line plots to show the attribution as a line plot. Thus, they move the visualization towards small multiples in the form of multiple line plots.…”
Section: Line Plots with Attribution Extensions (citation type: mentioning, confidence: 99%)
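The last statement describes moving attribution visualization toward small multiples of line plots. As a purely hypothetical illustration of that layout (random stand-in data; nothing here is taken from the cited papers), one could stack each channel of a multivariate series above its attribution trace:

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
series = rng.standard_normal((3, 100)).cumsum(axis=1)  # 3 stand-in channels
attribution = rng.random((3, 100))                     # stand-in saliency scores

# One row per channel and one per attribution trace: a "small multiples" grid.
fig, axes = plt.subplots(2 * len(series), 1, sharex=True, figsize=(6, 8))
for i, (s, a) in enumerate(zip(series, attribution)):
    axes[2 * i].plot(s)
    axes[2 * i].set_ylabel(f"channel {i}")
    axes[2 * i + 1].plot(a, color="tab:red")
    axes[2 * i + 1].set_ylabel(f"attribution {i}")
axes[-1].set_xlabel("time step")
fig.tight_layout()
plt.show()
```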