2021
DOI: 10.1038/s41467-021-25801-2

Next generation reservoir computing

Abstract: Reservoir computing is a best-in-class machine learning algorithm for processing information generated by dynamical systems using observed time-series data. Importantly, it requires very small training data sets, uses linear optimization, and thus requires minimal computing resources. However, the algorithm uses randomly sampled matrices to define the underlying recurrent neural network and has a multitude of metaparameters that must be optimized. Recent results demonstrate the equivalence of reservoir computing…
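The abstract's key claims — a fixed, randomly sampled recurrent network, with training reduced to linear optimization — can be illustrated with a minimal echo-state-style sketch. This is not the paper's implementation; the reservoir size, spectral radius, and regularization values below are illustrative assumptions, and the task (one-step-ahead prediction of a noisy sine wave) is a toy stand-in for the dynamical-systems data the abstract describes.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy time series: a noisy sine wave; the task is one-step-ahead prediction.
t = np.linspace(0, 20 * np.pi, 2000)
u = np.sin(t) + 0.01 * rng.standard_normal(t.size)

# Fixed, randomly sampled reservoir (never trained).
N = 100                                  # reservoir size -- one of the metaparameters
W_in = 0.5 * rng.uniform(-1, 1, N)       # random input weights
W = rng.uniform(-1, 1, (N, N))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))  # rescale spectral radius below 1

# Drive the reservoir with the input and collect its states.
states = np.zeros((u.size, N))
r = np.zeros(N)
for k in range(u.size - 1):
    r = np.tanh(W @ r + W_in * u[k])
    states[k + 1] = r                    # states[i] encodes inputs u[0..i-1]

# The only trained part: a linear (ridge-regularized) readout.
washout, reg = 100, 1e-6                 # discard transient; small Tikhonov term
X, y = states[washout:], u[washout:]
W_out = np.linalg.solve(X.T @ X + reg * np.eye(N), X.T @ y)

pred = X @ W_out
nrmse = np.sqrt(np.mean((pred - y) ** 2)) / np.std(y)
```

Only `W_out` is fit, by solving a regularized linear least-squares problem — which is why training needs so little data and compute; the random `W`, `W_in`, spectral radius, and regularization strength are the metaparameters the abstract says must otherwise be tuned.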

Cited by 296 publications (232 citation statements)
References 35 publications
“…In both cases there is an improvement in the performance with the correct choice of the delayed input. It has been demonstrated that the Lorenz x one-step-ahead prediction requires only the very recent history of the x-variable time series [13], and we find the optimal input delay of d = 1 to be consistent with this prior knowledge. For the Lorenz z cross-prediction task, on the other hand, there is a strong dependence on the history of the Lorenz x variable.…”
Section: Results (supporting)
confidence: 68%
“…In a recent paper [ 13 ], the authors aim to eliminate the issue of hyperparameter optimisation altogether by removing the reservoir. Their approach essentially takes the well-known nonlinear vector autoregression (NVAR) method, uses a less parsimonious approach to filling the feature vector, and adds Tikhonov regularisation.…”
Section: Introduction (mentioning)
confidence: 99%
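The citation above characterizes the paper's approach as nonlinear vector autoregression (NVAR) with Tikhonov regularization: the feature vector is built from a few delayed inputs plus their polynomial products, and a regularized linear readout is fit on top. The sketch below illustrates that idea under stated assumptions — the delay count `k_delays`, the quadratic-only nonlinearity, and the logistic-map test signal are all illustrative choices, not the paper's exact configuration.

```python
import numpy as np

# Toy scalar series: the logistic map, for one-step-ahead prediction.
u = np.empty(1000)
u[0] = 0.4
for k in range(999):
    u[k + 1] = 3.8 * u[k] * (1 - u[k])

# NVAR feature vector: constant + k delayed inputs (linear part)
# + all unique quadratic monomials of those delays (nonlinear part).
k_delays = 2

def nvar_features(u, k):
    rows = []
    for i in range(k, len(u)):
        lin = u[i - k + 1 : i + 1][::-1]   # u[i], u[i-1], ..., u[i-k+1]
        quad = [lin[a] * lin[b] for a in range(k) for b in range(a, k)]
        rows.append(np.concatenate(([1.0], lin, quad)))
    return np.array(rows)

X = nvar_features(u, k_delays)[:-1]        # features at times k .. n-2
y = u[k_delays + 1:]                       # next-step targets

# Tikhonov-regularized (ridge) least-squares readout -- the only fit step.
reg = 1e-8
W = np.linalg.solve(X.T @ X + reg * np.eye(X.shape[1]), X.T @ y)
pred = X @ W
nrmse = np.sqrt(np.mean((pred - y) ** 2)) / np.std(y)
```

Because the logistic map is exactly quadratic in its last value, the quadratic feature set recovers it almost perfectly; for richer dynamics one would increase the number of delays or the monomial order, which is where the "less parsimonious feature vector" trade-off mentioned in the citation arises.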
“…3a with G_2 = 0) is reasonable. Moreover, this finding supports the general observation that only the very recent history of the x-variable time series [13]… and we find the optimal parameters… was also demonstrated for a time-continuous system, indicating that our approach is applicable to a wide range of reservoirs. …for a reservoir to yield good performance on a range of tasks by only tuning the delayed input parameters, remain to be determined.…”
supporting
confidence: 86%