2020 IEEE 32nd International Conference on Tools With Artificial Intelligence (ICTAI) 2020
DOI: 10.1109/ictai50040.2020.00163

Time Series Averaging Using Multi-Tasking Autoencoder

Abstract: The estimation of an optimal time series average has been studied for over three decades. The process is challenging mainly because of temporal distortion. Previous approaches mostly addressed this challenge with alignment algorithms such as Dynamic Time Warping (DTW). However, the quadratic computational complexity of DTW and its inability to align more than two time series simultaneously complicate the estimation. In this paper, we follow a different path and state the averaging problem as a generative problem…
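The quadratic cost the abstract attributes to DTW comes from its dynamic-programming table over all pairs of time points. A minimal pure-Python sketch (illustrative only, not the paper's code) makes the O(n·m) structure explicit:

```python
def dtw_distance(a, b):
    """Classic DTW between two series: O(len(a) * len(b)) time and
    memory -- the quadratic complexity the abstract refers to."""
    n, m = len(a), len(b)
    INF = float("inf")
    # D[i][j] = cost of the best warping path aligning a[:i] with b[:j]
    D = [[INF] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i][j] = cost + min(D[i - 1][j],      # insertion
                                 D[i][j - 1],      # deletion
                                 D[i - 1][j - 1])  # match
    return D[n][m]

# A time-shifted copy of a series aligns at zero cost, which is
# exactly the temporal distortion DTW is designed to absorb.
print(dtw_distance([0, 0, 1, 2, 1, 0], [0, 1, 2, 1, 0, 0]))  # 0.0
```

Note that DTW as written compares exactly two series; averaging more than two requires repeated pairwise alignment, which is the limitation the paper sidesteps with its generative formulation.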

Cited by 10 publications (7 citation statements)
References 16 publications (53 reference statements)
“…CNN models are typically employed to analyze spatial or multidimensional data. However, one-dimensional CNN (1D CNN) can also be used to analyze texts and timeseries data [16]; 1D CNN can extract salient and representative features of time-series data by performing 1D convolution operations using multiple filters [17]. Figure 3 shows the difference between 1D CNN and 2D CNN.…”
Section: One Dimensional CNN Model
confidence: 99%
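The excerpt above describes 1D convolution extracting features from a time series with multiple filters. A hedged NumPy sketch of that operation (a toy illustration, not the cited papers' implementation; as in deep-learning convention, this is cross-correlation without kernel flipping):

```python
import numpy as np

def conv1d(series, filters, stride=1):
    """Valid 1D convolution of one series with a bank of filters.
    Each row of `filters` slides along the series, producing one
    feature channel per filter."""
    k = filters.shape[1]
    steps = (len(series) - k) // stride + 1
    out = np.empty((filters.shape[0], steps))
    for f, w in enumerate(filters):
        for t in range(steps):
            out[f, t] = np.dot(series[t * stride:t * stride + k], w)
    return out

series = np.array([0., 1., 2., 3., 2., 1., 0.])
filters = np.array([[1., -1.],    # difference (edge) detector
                    [0.5, 0.5]])  # local average
features = conv1d(series, filters)
print(features.shape)  # (2, 6): 2 filters, 6 valid positions
```

With a length-7 input and length-2 kernels, each filter yields 6 valid positions; stacking filters gives the multi-channel feature map the excerpt describes.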
“…We propose to overcome this limitation by utilizing a multitasking autoencoder, which is set to perform multi-class classification and reconstruction. This configuration was proposed in [18], in order to estimate the averages of multi-class temporal datasets from their latent embedding. With this in mind, we will next present the methodology used.…”
Section: A Brief Review of Time Series Clustering Techniques
confidence: 99%
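The multi-tasking configuration described above pairs one shared encoder with two heads: a decoder for reconstruction and a classifier for the multi-class task. A minimal linear toy sketch (the dimensions, weights, and linear layers are assumptions for illustration; the actual network in [18] is convolutional):

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes: series length N, latent size D, K classes.
N, D, K = 16, 4, 3
W_enc = rng.normal(size=(N, D))   # shared encoder: series -> latent embedding
W_dec = rng.normal(size=(D, N))   # head 1: reconstruction
W_cls = rng.normal(size=(D, K))   # head 2: multi-class classification

def forward(x):
    """One forward pass: both heads read the same latent code z,
    which is the embedding averages are later estimated from."""
    z = np.tanh(x @ W_enc)        # shared latent code
    x_hat = z @ W_dec             # reconstructed series
    logits = z @ W_cls            # class scores
    return z, x_hat, logits

x = rng.normal(size=N)
z, x_hat, logits = forward(x)
print(z.shape, x_hat.shape, logits.shape)  # (4,) (16,) (3,)
```

Because both losses backpropagate through the same encoder, the latent space is shaped jointly by reconstruction fidelity and class separation, which is what makes per-class averaging in that space plausible.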
“…In general, the multitasking setup optimizes for the reconstruction and multi-class classification losses given in (5), where X_i and X̂_i are the input and reconstructed time series in R^N. Moreover, p_{i,j} are the softmax activation values (the likelihood) of a series X_i belonging to category (Cat_j) [18]. In terms of layer arrangement, the multitasking network is constructed from transposed and normal convolutional, max-pooling, flattening, and dense layers [25].…”
Section: Proposed Network Architecture
confidence: 99%
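The joint objective the excerpt sketches, reconstruction error plus cross-entropy on the softmax likelihoods p_{i,j}, can be written out in NumPy. Since the scraped excerpt omits equation (5) itself, the exact form below (MSE for reconstruction, equal weighting via `alpha`) is an assumption:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))  # numerically stable
    return e / e.sum(axis=-1, keepdims=True)

def multitask_loss(X, X_hat, logits, labels, alpha=1.0):
    """Reconstruction MSE over series in R^N plus multi-class
    cross-entropy on the softmax likelihoods p_{i,j}; `alpha` is an
    assumed weight balancing the two terms."""
    recon = np.mean((X - X_hat) ** 2)
    p = softmax(logits)                 # p[i, j]: series i, class j
    ce = -np.mean(np.log(p[np.arange(len(labels)), labels]))
    return recon + alpha * ce

rng = np.random.default_rng(0)
X = rng.normal(size=(4, 16))            # four input series in R^16
X_hat = X + 0.1 * rng.normal(size=(4, 16))  # imperfect reconstructions
logits = rng.normal(size=(4, 3))        # scores over 3 classes
labels = np.array([0, 2, 1, 0])
loss = multitask_loss(X, X_hat, logits, labels)
print(loss > 0.0)  # True
```

Perfect reconstruction (X_hat equal to X) zeroes the first term, so the loss then reduces to the classification term alone, matching the two-task decomposition in the excerpt.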