2023
DOI: 10.32473/flairs.36.133107

Making Time Series Embeddings More Interpretable in Deep Learning

Abstract: With the success of language models in deep learning, multiple new time series embeddings have been proposed. However, the interpretability of these representations often still lags behind that of word embeddings. This paper tackles this issue, aiming to present criteria for making time series embeddings used in deep learning models more interpretable through higher-level features in symbolic form. To this end, we investigate two different approaches for extracting symbolic approximation representations …

Cited by 1 publication (5 citation statements)
References: 47 publications
“…Therefore, we aim to use more symbolic embedding approaches, like Symbolic Aggregate approXimation (SAX) (Lin et al. 2003) or Symbolic Fourier Approximation (SFA) (Schäfer and Högqvist 2012). SAX, in particular, has been successfully applied as a symbolic embedding for deep learning (Lavangnananda and Sawasdimongkol 2012; Schwenke and Atzmueller 2021c; 2021b; Criado-Ramón, Ruiz, and Pegalajar 2022; Tabassum, Menon, and Jastrzebska 2022), via a more human-related representation, cf. Atzmueller et al. (2017); Ramirez, Wimmer, and Atzmueller (2019).…”
Section: Symbolic Time Series Embeddings
confidence: 99%
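To make the SAX reference above more concrete, the sketch below shows a minimal SAX transform in Python. This is a hedged illustration, not the paper's implementation: the segment count and alphabet size are arbitrary example values, and NumPy/SciPy are assumed to be available. The series is z-normalized, reduced via piecewise aggregate approximation (PAA), and each segment mean is mapped to a letter using Gaussian breakpoints.

import numpy as np
from scipy.stats import norm

def sax_transform(series, n_segments=8, alphabet_size=4):
    # SAX (Lin et al. 2003): z-normalize, piecewise aggregate, then discretize.
    x = np.asarray(series, dtype=float)
    x = (x - x.mean()) / (x.std() + 1e-8)          # z-normalization
    paa = np.array([seg.mean() for seg in np.array_split(x, n_segments)])  # PAA
    # Breakpoints chosen so the symbols are equiprobable under a standard normal distribution.
    breakpoints = norm.ppf(np.linspace(0, 1, alphabet_size + 1)[1:-1])
    return "".join(chr(ord("a") + i) for i in np.digitize(paa, breakpoints))

t = np.linspace(0, 2 * np.pi, 128)
print(sax_transform(np.sin(t)))                    # prints an 8-letter symbolic string for one sine period

Each input series is thus reduced to a short string over a small alphabet, which is what makes SAX-style embeddings comparatively easy to read and inspect.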
“…Transformers are especially interesting for time series data due to their ability to handle long-term dependencies (Li et al. 2019), while also being able to act as a data encoder, e.g., in the original encoder-decoder application. Additionally, the introduction of attention via multi-head attention (MHA) enables further explainable methods (Vig 2019; Škrlj et al. 2020; Schwenke and Atzmueller 2021c; 2021b). Here, we utilize the Transformer architecture as the foundation of our attention-based abstraction approach.…”
Section: Transformer Architecture
confidence: 99%
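The role of the Transformer as a data encoder over embedded time-series tokens, as described above, can be sketched in a few lines of PyTorch. This is an illustrative assumption of such a setup, not the paper's architecture; all dimensions and layer counts are arbitrary. The attention weights returned by MHA are what attention-based explanation methods typically inspect.

import torch
import torch.nn as nn

d_model, n_heads, seq_len, batch = 32, 4, 64, 8

# Stack of Transformer encoder layers acting as an encoder for (embedded) series tokens.
encoder_layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=n_heads, batch_first=True)
encoder = nn.TransformerEncoder(encoder_layer, num_layers=2)

tokens = torch.randn(batch, seq_len, d_model)      # stand-in for embedded (symbolic) time-series tokens
encoded = encoder(tokens)                          # contextualized representations, same shape as input

# Multi-head attention exposes its attention weights, which explanation methods can visualize.
mha = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
_, attn_weights = mha(tokens, tokens, tokens, need_weights=True)
print(encoded.shape, attn_weights.shape)           # torch.Size([8, 64, 32]) torch.Size([8, 64, 64])

Because every position can attend to every other position, long-term dependencies are captured directly, which is the property the attention-based abstraction approach cited above builds on.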