It is well known that the Extended Kalman Filter (EKF) neural network training algorithm is superior to the standard backpropagation algorithm. However, there are many variations on the EKF implementation that can significantly affect its performance. For example, improper initialization of three parameters causes the algorithm to perform poorly. There are also two advanced methods, decoupling and multi-streaming, which must be applied appropriately based on the specifics of the problem. This paper presents the results of extensive experimentation in applying the EKF training method to recurrent and static neural networks. The goal is to demonstrate how different variations on its implementation affect performance and to find methods to optimize performance. The paper examines the effects of decoupling, multi-streaming, and the initial values of the constants used by the algorithm. Three new ideas are suggested that can lead to improved performance: initializing parameters to values outside the previously suggested range, a new decoupling strategy, and reducing the update rate of the error covariance matrix for faster training.
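To make the setting concrete, the following is a minimal sketch of a global EKF weight update on a toy linear model. The three initialization constants shown (the initial error covariance P, the artificial process-noise covariance Q, and the measurement-noise covariance R) are the kind of tunable parameters the abstract refers to; their values here, and the toy problem itself, are illustrative assumptions, not the paper's actual settings.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy problem: fit y = w_true . x with a linear "network" trained by EKF.
w_true = np.array([2.0, -1.0])
n = len(w_true)

w = np.zeros(n)        # weight estimate (the "network weights")
P = np.eye(n) * 100.0  # initial error covariance -- a key tuning constant
R = np.array([[1.0]])  # measurement-noise covariance -- a key tuning constant
Q = np.eye(n) * 1e-4   # artificial process noise -- a key tuning constant

for _ in range(200):
    x = rng.normal(size=n)
    y = w_true @ x + rng.normal(scale=0.1)
    H = x.reshape(1, n)             # Jacobian of output w.r.t. weights (here: just x)
    y_hat = w @ x
    S = H @ P @ H.T + R             # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)  # Kalman gain
    w = w + (K * (y - y_hat)).ravel()   # weight update driven by the error
    P = P - K @ H @ P + Q           # error covariance update

print(np.round(w, 1))
```

For a real network, H would be the Jacobian of the network output with respect to all weights (obtained by backpropagating derivatives), and decoupling would replace the full P with smaller per-group blocks to cut the cost of the covariance update.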