1995
DOI: 10.1109/72.471372

Neural modeling for time series: A statistical stepwise method for weight elimination

Abstract: Many authors use feedforward neural networks for modeling and forecasting time series. Most of these applications are mainly experimental, and it is often difficult to extract a general methodology from the published studies. In particular, the choice of architecture is a tricky problem. We try to combine the statistical techniques of linear and nonlinear time series with the connectionist approach. The asymptotical properties of the estimators lead us to propose a systematic methodology to determine which weights…
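The stepwise idea sketched in the abstract can be illustrated generically: treat each trained weight like a regression coefficient, estimate its standard error from the asymptotic covariance of the least-squares estimator, and flag weights whose t-like statistic is small. The sketch below is a hedged reconstruction of that general approach, not the paper's exact statistic or procedure; the function names and the 1.96 threshold are illustrative assumptions.

```python
# Hedged sketch of statistical weight elimination: a t-like test per
# weight, using the Gauss-Newton curvature J^T J of a sum-of-squares
# loss. Not the cited paper's exact procedure.
import numpy as np

def weight_t_statistics(weights, jtj, residual_var):
    """t-like statistic w_i / se(w_i) for each weight.

    weights:      flat parameter vector at the least-squares optimum
    jtj:          J^T J, the Gauss-Newton approximation to the curvature
    residual_var: estimated noise variance sigma^2
    """
    cov = residual_var * np.linalg.pinv(jtj)         # asymptotic covariance
    se = np.sqrt(np.clip(np.diag(cov), 1e-12, None))
    return weights / se

def elimination_candidates(weights, jtj, residual_var, threshold=1.96):
    """Mask of weights whose |t| falls below the threshold; in a stepwise
    scheme one would remove one, re-train, and re-test."""
    return np.abs(weight_t_statistics(weights, jtj, residual_var)) < threshold
```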

Citing publications: 1998–2024

Cited by 177 publications (88 citation statements)
References 20 publications
“…The most popular algorithm for training is the well-known backpropagation [54], which is basically a gradient steepest-descent method with a constant step size. Due to problems of slow convergence and inefficiency with the steepest-descent method, many variations of backpropagation have been introduced for training neural networks [5,13,41]. Recently, Hung and Denton [27] and Subramanian and Hung [59] have proposed to use a general-purpose nonlinear optimizer, GRG2, in training neural networks.…”
Section: Neural Network
mentioning
confidence: 99%
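The constant-step-size steepest descent that this passage criticizes is easy to make concrete. Below is a minimal NumPy sketch of plain batch backpropagation with a fixed learning rate; it is an assumed illustration, not code from the cited works, and the network size, toy data, and learning rate eta are arbitrary choices.

```python
# Minimal sketch: classic backpropagation as steepest descent with a
# constant step size, on a one-hidden-layer network (assumed example).
import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(200, 1))            # toy regression data
y = np.sin(X) + 0.1 * rng.standard_normal(X.shape)

H = 8                                            # hidden units
W1 = 0.5 * rng.standard_normal((1, H)); b1 = np.zeros(H)
W2 = 0.5 * rng.standard_normal((H, 1)); b2 = np.zeros(1)

eta = 0.05                                       # constant step size
for epoch in range(2000):
    a = np.tanh(X @ W1 + b1)                     # forward pass
    err = (a @ W2 + b2) - y
    n = len(X)
    gW2 = a.T @ err / n;  gb2 = err.mean(axis=0) # backpropagated gradients
    da = (err @ W2.T) * (1 - a ** 2)             # tanh derivative
    gW1 = X.T @ da / n;   gb1 = da.mean(axis=0)
    W2 -= eta * gW2;  b2 -= eta * gb2            # steepest-descent update
    W1 -= eta * gW1;  b1 -= eta * gb1

print("final MSE:", float((err ** 2).mean()))
```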
“…This is particularly important as some time series exhibit long-term cyclical behaviour, which is often ignored. Furthermore, as for other types of neural network, a TDNN cannot predict trend because the output of the network is typically computed using a threshold function, with the output of the function bounded to values, say, between 0 and 1 [7]. Autoregressive models, such as the seasonal autoregressive integrated moving average model (SARIMA), have been used to generate deseasonalised and de-trended time series, with the residuals used to train a TDNN, obtaining good results [2,3,4].…”
mentioning
confidence: 99%
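The SARIMA-then-network strategy this passage cites [2,3,4] can be sketched as follows: fit a seasonal ARIMA model, take its residuals as the de-trended, deseasonalised component, and build tapped-delay windows for a TDNN-style model. The model orders, window length p, and synthetic data below are assumptions for illustration.

```python
# Hedged sketch: SARIMA residuals as training data for a tapped-delay
# network. Orders (1,1,0)x(1,1,0,12) and p=6 are illustrative choices.
import numpy as np
from statsmodels.tsa.statespace.sarimax import SARIMAX

rng = np.random.default_rng(0)
t = np.arange(240)
# Synthetic monthly series: linear trend + yearly cycle + noise.
series = 0.05 * t + 2 * np.sin(2 * np.pi * t / 12) + 0.3 * rng.standard_normal(240)

fit = SARIMAX(series, order=(1, 1, 0), seasonal_order=(1, 1, 0, 12)).fit(disp=False)
resid = fit.resid                       # de-trended, deseasonalised residuals

p = 6                                   # number of input delays (assumed)
X = np.stack([resid[i:i + p] for i in range(len(resid) - p)])
y = resid[p:]
print(X.shape, y.shape)                 # windows ready for TDNN training
```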
“…If the set V_k is no longer empty, then the age_k variable is set back to 0. After the application of the operators, all the unfrozen parameters (sub-networks) are updated according to equations (5), (8) and (9). Afterwards, the whole net architecture is frozen.…”
Section: SONFIS: Self-Organizing Neuro-Fuzzy Inference System
mentioning
confidence: 99%
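Equations (5), (8) and (9) of the cited paper are not reproduced in the excerpt, so the freeze/age bookkeeping it describes can only be sketched loosely. In the sketch below, `SubNetwork`, the dictionary `V` of sets V_k, and `update_parameters` are hypothetical stand-ins for the paper's actual structures and update rules.

```python
# Loose reconstruction of the quoted bookkeeping: reset age_k when V_k
# becomes non-empty, update only unfrozen sub-networks, then freeze all.
from dataclasses import dataclass, field

@dataclass
class SubNetwork:
    params: list = field(default_factory=list)
    frozen: bool = True
    age: int = 0

def evolve_step(subnets, V, update_parameters):
    """One step; update_parameters stands in for eqs. (5), (8), (9)."""
    for k, net in enumerate(subnets):
        if V.get(k):                    # V_k no longer empty -> reset age_k
            net.age = 0
    for net in subnets:
        if not net.frozen:              # only unfrozen parameters are updated
            update_parameters(net)
    for net in subnets:
        net.frozen = True               # afterwards, freeze the whole net
```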
“…The users, based on their empirical intuitions and experience, are usually the ones that select the appropriate architecture to solve specific learning problems. For example, both the selection of the number of hidden neurons of a feedforward artificial neural network [7,9] and the activation functions in an adaptive network [13] are crucial and difficult decisions for system modeling in many real-world problems. If the model size (complexity) is underestimated, the model will not be able to effectively solve the problem, while an overly large size tends to overfit the training data and consequently results in poor generalization performance [28].…”
Section: Introduction
mentioning
confidence: 99%
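The under/over-sizing dilemma this passage describes can be made concrete with a hedged illustration: train networks of several hidden sizes and compare them on a held-out set. The candidate sizes, toy data, and use of scikit-learn's MLPRegressor are assumptions, not taken from the cited works.

```python
# Hedged illustration: select the hidden-layer size by validation error.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, (400, 1))
y = np.sin(X).ravel() + 0.1 * rng.standard_normal(400)
X_tr, X_va, y_tr, y_va = train_test_split(X, y, random_state=0)

scores = {}
for h in (1, 2, 4, 8, 16, 64):          # too small underfits, too large overfits
    net = MLPRegressor(hidden_layer_sizes=(h,), max_iter=5000,
                       random_state=0).fit(X_tr, y_tr)
    scores[h] = float(((net.predict(X_va) - y_va) ** 2).mean())

best = min(scores, key=scores.get)
print(scores, "-> selected hidden size:", best)
```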