2018
DOI: 10.48550/arxiv.1803.07870
Preprint
Reservoir computing approaches for representation and classification of multivariate time series

Cited by 6 publications (12 citation statements) · References 0 publications
“…The desired (task-dependent) output is then generated by a readout layer (usually linear) trained to match the states with the desired outputs. Despite the simplified training protocol, ESNs are universal function approximators 5 and have been shown to be effective in many relevant tasks [6][7][8][9][10][11][12].…”
Section: Introduction
confidence: 99%
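The passage above notes that only the (usually linear) readout is trained to map reservoir states to desired outputs. A minimal sketch of such a readout fit, assuming precomputed reservoir states and using ridge-regularized least squares (all names here are illustrative, not from the cited papers):

```python
import numpy as np

def train_readout(states, targets, ridge=1e-6):
    """Fit a linear readout W_out so that states @ W_out ~= targets.

    states:  (T, N) matrix of reservoir states over T time steps
    targets: (T, M) matrix of desired outputs
    Solves the ridge-regularized least-squares problem in closed form.
    """
    n = states.shape[1]
    gram = states.T @ states + ridge * np.eye(n)
    return np.linalg.solve(gram, states.T @ targets)

# Toy usage: recover a known linear map from surrogate "reservoir states".
rng = np.random.default_rng(0)
H = rng.standard_normal((200, 10))     # surrogate reservoir states
W_true = rng.standard_normal((10, 2))  # ground-truth readout
Y = H @ W_true                         # noiseless targets
W_out = train_readout(H, Y)
```

The closed-form solve is what makes the "simplified training protocol" cheap: no backpropagation through time is needed, only one linear system per readout.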
“…To ease readability, we opted to present just a natural-language description of each query, instead of its formal representation in LASER. 6 The experiments were run on a computer with an Intel i7 8th-gen hexa-core processor, an Nvidia 1050 Ti GPU, and 8 GB of RAM. We note that pre-processing the input data (for LASER) and encoding the input for the networks took only a few milliseconds per time step, which is why it is not reported individually. Training the networks required on average no more than ten minutes for both kinds of networks, and is performed only once before applying the network; once trained, the neural networks never required more than 300 µs to produce the output for one time step in any of the experiments.…”
Section: Description Of Experiments and Results
confidence: 99%
“…In this paper, we investigate the feasibility of using Deep Neural Networks to approximate stream reasoning with LASER, to take advantage of their high processing speed. We explore two types of neural networks, namely Convolutional Neural Networks (CNN) [28] and Recurrent Neural Networks (RNNs) [25], which have been shown to obtain good results when applied to time-annotated data problems, such as time series forecasting and classification [6,12]. For our experiments, we consider a real dataset with time-annotated sensor data on traffic, pollution and weather conditions, and explore different types of LASER queries, in order to cover different expressive features of its language.…”
Section: Introduction
confidence: 99%
“…Reservoir Computing (RC) is a less conventional method for using Recurrent Neural Networks that has been widely used in applications such as time-series forecasting (Deihimi & Showkati, 2012; Bianchi et al, 2015b;a), process modelling (Rodan et al, 2017), speech analysis (Trentin et al, 2015), and classification of multivariate time series (Bianchi et al, 2018). RC models conceptually divide time-series processing into two components: (i) representation of temporal structure in the input stream through a non-adaptable dynamic reservoir (generated through the feedback-driven dynamics of a randomly drawn RNN), and (ii) an easy-to-adapt readout from the reservoir.…”
Section: Reservoir Computing
confidence: 99%
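The two-component view quoted above (a fixed, randomly drawn recurrent reservoir whose states feed an adaptable readout) can be sketched with a generic echo state network update. This is a minimal illustration under common ESN conventions, not the specific model of any cited paper; all names and hyperparameters are illustrative:

```python
import numpy as np

class Reservoir:
    """Fixed, randomly drawn recurrent reservoir: its weights are never trained."""

    def __init__(self, n_in, n_res=100, spectral_radius=0.9, seed=0):
        rng = np.random.default_rng(seed)
        self.W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))
        W = rng.standard_normal((n_res, n_res))
        # Rescale so the recurrent matrix has the chosen spectral radius
        # (a standard heuristic for the echo state property).
        W *= spectral_radius / np.max(np.abs(np.linalg.eigvals(W)))
        self.W = W

    def run(self, inputs):
        """Drive the reservoir with an input sequence; return all states, shape (T, n_res)."""
        h = np.zeros(self.W.shape[0])
        states = []
        for x in inputs:
            h = np.tanh(self.W @ h + self.W_in @ x)  # feedback-driven dynamics
            states.append(h)
        return np.array(states)

# Usage: represent a univariate time series as a sequence of reservoir states.
res = Reservoir(n_in=1)
X = np.sin(np.linspace(0, 8 * np.pi, 300)).reshape(-1, 1)
H = res.run(X)
```

The states `H` are then the fixed representation on which a simple readout (component (ii) in the quote) is trained, e.g. by linear regression.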