2008
DOI: 10.1162/neco.2008.09-07-615
Long-Range Out-of-Sample Properties of Autoregressive Neural Networks

Cited by 3 publications (3 citation statements) · References 4 publications
“…What is required, in order to translate our account into neural terms, is an account of how this marginalization operation might be carried out in a neural network. Fortunately, a number of recent theoretical papers have addressed just this problem (Beck & Pouget, 2007; Deneve, 2008; Lee & Mumford, 2003; Litvak & Ullman, 2009; Ma, Beck, Latham, & Pouget, 2006; Pouget, Dayan, & Zemel, 2003; Rao, 2006). One approach that is particularly well suited to the present application was proposed by Rao (2005).…”
Section: Neural Network Model
confidence: 99%
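For context, the marginalization operation referred to in this excerpt is simply a sum of a joint distribution over nuisance variables. A minimal numerical sketch in NumPy (the joint table and variable names are illustrative, not taken from the cited papers, which concern how neural circuits could approximate this sum):

    import numpy as np

    # Illustrative joint distribution P(s, c) over a stimulus s (rows)
    # and a nuisance variable c (columns); entries sum to 1.
    joint = np.array([[0.10, 0.05, 0.05],
                      [0.20, 0.10, 0.10],
                      [0.15, 0.15, 0.10]])

    # Marginalize out c: P(s) = sum_c P(s, c)
    p_s = joint.sum(axis=1)

    print(p_s)        # [0.2 0.4 0.4]
    print(p_s.sum())  # 1.0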
“…The first class of approaches builds on deterministic methods for evaluating, exactly or approximately, the desired conditional and/or marginal distributions, whereas the second class relies on sampling from the probability distributions in question. Multiple models in the class of deterministic approaches implement algorithms from machine learning called message passing or belief propagation [30][33]. By cleverly reordering the sum and product operators that occur in the evaluation of the desired probabilities, the total number of computation steps is drastically reduced.…”
Section: Introduction
confidence: 99%
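The saving from reordering sums and products can be made concrete with a toy chain-factored model p(x, y, z) ∝ f(x, y) g(y, z): the naive marginal over (y, z) costs on the order of |X||Y||Z| operations, while summing z out first (the message-passing idea) costs on the order of |Y||Z| + |X||Y|. A sketch, with sizes and factor names chosen purely for illustration:

    import numpy as np

    rng = np.random.default_rng(0)
    nx, ny, nz = 50, 50, 50

    # Nonnegative factors of a chain-structured model p(x,y,z) ∝ f(x,y) * g(y,z)
    f = rng.random((nx, ny))
    g = rng.random((ny, nz))

    # Naive: form all nx*ny*nz products, then sum (cubic cost).
    naive = (f[:, :, None] * g[None, :, :]).sum(axis=(1, 2))

    # Reordered: eliminate z first, then y (quadratic cost).
    message = g.sum(axis=1)   # mu(y) = sum_z g[y, z]
    clever = f @ message      # sum_y f[x, y] * mu(y)

    assert np.allclose(naive, clever)

Both give the same unnormalized marginal over x; only the number of arithmetic steps differs, which is exactly the reordering the excerpt describes.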
“…ARIMA has an in-built mechanism to transform a nonstationary time series into a stationary one by differencing the given series (Brockwell and Lindner 2010). ANNs are asymptotically stationary, and training a neural network requires the underlying process to be stationary; when applied to a nonstationary process, the out-of-sample predictions become poor (Leoni 2009). In the hybrid formulation based on the ARIMA and ARNN models, this problem can be overcome, since we deal only with the additive error terms generated by ARIMA and model them using the ARNN model.…”
Section: Asymptotic Stationarity of the Proposed Model
confidence: 99%
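A minimal sketch of the hybrid idea described above, assuming statsmodels for the ARIMA stage and a small scikit-learn MLP as a stand-in for the ARNN component (the series, lag order p, and network size are all illustrative; this is not the citing authors' exact implementation):

    import numpy as np
    from statsmodels.tsa.arima.model import ARIMA
    from sklearn.neural_network import MLPRegressor

    rng = np.random.default_rng(0)

    # Illustrative nonstationary series: a random walk plus a mild nonlinearity.
    n = 300
    y = np.cumsum(rng.normal(size=n)) + 0.5 * np.sin(np.linspace(0, 20, n))

    # Stage 1: ARIMA captures the nonstationary linear structure;
    # d=1 differencing is its in-built stationarizing mechanism.
    arima = ARIMA(y, order=(1, 1, 1)).fit()
    resid = arima.resid  # additive error terms left by the linear fit

    # Stage 2: fit an autoregressive neural net to the (stationary) residuals,
    # predicting resid[t] from the p preceding residuals.
    p = 3
    X = np.column_stack([resid[i:len(resid) - p + i] for i in range(p)])
    t = resid[p:]
    arnn = MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000,
                        random_state=0).fit(X, t)

    # Combined one-step-ahead forecast: linear part + predicted residual.
    linear_part = arima.forecast(steps=1)[0]
    nonlinear_part = arnn.predict(resid[-p:].reshape(1, -1))[0]
    print(linear_part + nonlinear_part)

Because the ARNN sees only the residual series, the asymptotic-stationarity concern raised by Leoni (2009) applies to a series that is already stationary, which is the point of the hybrid formulation.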