2018
DOI: 10.1109/tkde.2018.2820119
Diverse Relevance Feedback for Time Series with Autoencoder Based Summarizations

Abstract: We present a relevance-feedback-based browsing methodology using different representations for time series data. The best-performing representation type (e.g., among the dual-tree complex wavelet transformation, Fourier, and symbolic aggregate approximation (SAX)) is learned from user annotations of the presented query results with representation feedback. We present the use of autoencoder-type neural networks to summarize time series or their representations into sparse vectors, which serve as another representation…
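
The abstract describes summarizing a time series into a sparse vector with an autoencoder. Below is a minimal sketch of that idea, not the paper's implementation: the window length, code size, sparsity weight, and the L1 penalty used to encourage sparse codes are all illustrative assumptions.

# Sketch: fully connected autoencoder that compresses time-series windows
# into sparse summary vectors. All hyperparameters are assumed, not the paper's.
import torch
import torch.nn as nn

WINDOW = 128      # length of each time-series segment (assumed)
CODE = 16         # size of the sparse summary vector (assumed)
L1_WEIGHT = 1e-3  # weight of the sparsity penalty on the code (assumed)

class SparseAE(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(WINDOW, CODE), nn.ReLU())
        self.decoder = nn.Linear(CODE, WINDOW)

    def forward(self, x):
        code = self.encoder(x)
        return code, self.decoder(code)

model = SparseAE()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
mse = nn.MSELoss()

# Toy data: 256 random windows standing in for real time-series segments.
x = torch.randn(256, WINDOW)
for epoch in range(50):
    code, recon = model(x)
    # Reconstruction error plus an L1 term that pushes the code toward sparsity.
    loss = mse(recon, x) + L1_WEIGHT * code.abs().mean()
    opt.zero_grad()
    loss.backward()
    opt.step()

summaries = model.encoder(x).detach()  # sparse vectors used as a representation

After training, each row of summaries is a low-dimensional, mostly-zero vector that can stand in for its window when comparing or ranking time series.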

Cited by 9 publications (1 citation statement) | References 38 publications
“…The autoencoder is an unsupervised self-learning model with a three-layer fully connected neural network structure, comprising the input layer, the hidden layer, and the output layer. The encoder maps the input data from the high-dimensional space to a code in the low-dimensional space, and the decoder reconstructs the feature-layer output back into the original input data; the effective features of the input data are obtained from the hidden layer through the encoding and decoding processes [16]. In this research, the constructed autoencoder structure is shown in figure 2.…”
Section: Self-designed Automatic Encoder
confidence: 99%
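
The citation statement describes the standard three-layer structure: input layer, hidden (code) layer, output layer, with the encoder mapping high-dimensional input to a low-dimensional code and the decoder reconstructing it. A small numpy illustration of that forward mapping follows; the dimensions, the ReLU hidden layer, and the random weights are assumptions for illustration, not the cited paper's design.

# Sketch of the three-layer autoencoder forward pass described above.
# Dimensions and activation are assumed; weights here are untrained.
import numpy as np

rng = np.random.default_rng(0)
D_IN, D_HID = 100, 10          # high-dimensional input, low-dimensional code

W_enc = rng.normal(scale=0.1, size=(D_IN, D_HID))   # input -> hidden weights
b_enc = np.zeros(D_HID)
W_dec = rng.normal(scale=0.1, size=(D_HID, D_IN))   # hidden -> output weights
b_dec = np.zeros(D_IN)

def encode(x):
    # Map the input from the high-dimensional space to the low-dimensional code.
    return np.maximum(0.0, x @ W_enc + b_enc)        # ReLU hidden layer

def decode(h):
    # Reconstruct the original input from the hidden-layer features.
    return h @ W_dec + b_dec

x = rng.normal(size=D_IN)
code = encode(x)    # the "effective features" taken from the hidden layer
x_hat = decode(code)  # reconstruction compared against x during training

Training would adjust W_enc, b_enc, W_dec, and b_dec to minimize the difference between x_hat and x, so that the hidden layer retains the information needed to rebuild the input.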