1995
DOI: 10.1007/bf02311573
Adaptative time constants improve the prediction capability of recurrent neural networks

Abstract: Classical statistical techniques for prediction reach their limitations in applications with nonlinearities in the data set; nevertheless, neural models can counteract these limitations. In this paper, we present a recurrent neural model where we associate an adaptative time constant to each neuron-like unit and a learning algorithm to train these dynamic recurrent networks. We test the network by training it to predict the Mackey-Glass chaotic signal. To evaluate the quality of the prediction, we co…
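For context, the Mackey-Glass signal named in the abstract is generated from a delay-differential equation. The sketch below is a minimal Euler integration of that equation; the parameter values (tau=17, beta=0.2, gamma=0.1, n=10) are the commonly used chaotic-regime settings, not values taken from this paper:

```python
import numpy as np

def mackey_glass(n_steps, tau=17, beta=0.2, gamma=0.1, n=10, dt=1.0, x0=1.2):
    """Generate the Mackey-Glass delay-differential series
        dx/dt = beta * x(t - tau) / (1 + x(t - tau)**n) - gamma * x(t)
    using simple forward-Euler steps of size dt."""
    history = int(tau / dt)
    # Pad the start with the constant initial condition x0.
    x = np.full(n_steps + history, x0)
    for t in range(history, n_steps + history - 1):
        x_tau = x[t - history]  # delayed state x(t - tau)
        x[t + 1] = x[t] + dt * (beta * x_tau / (1.0 + x_tau**n) - gamma * x[t])
    return x[history:]

series = mackey_glass(1000)
```

With tau = 17 the series is chaotic, which is what makes it a standard benchmark for one-step-ahead prediction.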

Cited by 11 publications (10 citation statements)
References 12 publications
“…Successful trainings were also used to produce kinematic patterns from unknown inputs (Figure 1B) aiming to produce walking for multiple purposes, such as virtual avatars or robotic exoskeletons. The training is supervised, involving learning rule adaptations of synaptic weights and time constant of each unit (Draye et al, 1995, 1996). A specific training procedure using Almeida algorithm was used to optimize learning performance (Cheron et al, 2011).…”
Section: Methods
confidence: 99%
“…Introduction of timing allows modeling of more complex frequency behavior, improves the non-linearity effect of the sigmoid function and the memory effect of time delays (Draye et al, 1995). The distribution of the time constant and the synaptic weights between units (Draye et al, 1996) after learning was analyzed after multiple pattern learning and prediction.…”
Section: Methods
confidence: 99%
“…They are thus particularly suitable for adaptive temporal processing (e.g., system identification, time series prediction, and control). In previous work, we proved that the recurrent aspect of the model greatly improves the network performance (Draye et al., 1995).…”
Section: Identification of the EMG-Motion Relationship Using Dynamic …
confidence: 99%
“…The time constants will be considered in the learning process too because they play an important role in the dynamical performance of the network (Draye et al., 1995; 1996). [Here F is] the activation function, x_j the total input of the neuron, w_kj the weights for the k incoming signals y_k on neuron j, and θ_j an external input or bias.…”
Section: Time Dependent Recurrent Backpropagation: Learning Rules
confidence: 99%
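The last quote describes leaky-integrator units of the form T_j dy_j/dt = -y_j + F(x_j), where the per-unit time constant T_j is trained alongside the weights. A minimal forward-Euler sketch of those dynamics follows; the symbol names (W, theta, T), the Euler step size, and the random parameter ranges are illustrative assumptions, not values from the paper:

```python
import numpy as np

def step(y, W, theta, T, dt=0.1):
    """One Euler step of the leaky-integrator dynamics
        T_j dy_j/dt = -y_j + F(sum_k W_jk y_k + theta_j),
    with F a sigmoid and a per-unit (adaptive) time constant T_j."""
    x = W @ y + theta                 # total input of each unit
    F = 1.0 / (1.0 + np.exp(-x))      # sigmoid activation
    return y + (dt / T) * (-y + F)    # leaky relaxation toward F(x)

rng = np.random.default_rng(0)
n = 5
W = rng.normal(scale=0.5, size=(n, n))
theta = rng.normal(scale=0.1, size=n)
T = rng.uniform(0.5, 2.0, size=n)     # time constants, trainable like W
y = np.zeros(n)
for _ in range(100):
    y = step(y, W, theta, T)
```

Because each update is a convex combination of y and the sigmoid output (for dt < min(T)), the state stays in [0, 1]; training the T_j lets each unit pick its own effective memory span, which is the mechanism the cited papers exploit.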