Artificial Neural Networks (ANNs) have gained tremendous attention from researchers, largely because their architecture makes them a powerful technique for problems such as classification, pattern recognition, and data analysis. ANNs are known for their data-driven, self-adaptive, and non-linear capabilities, their high processing speed, and their ability to learn the solution to a problem from a set of examples. Neural network training has recently become a dynamic area of research, with the Multi-Layer Perceptron (MLP) trained with Back-Propagation (BP) being the most popular approach and the subject of work by various researchers. In this study, the performance of BP training algorithms, gradient descent and gradient descent with momentum, each using the sigmoidal and hyperbolic tangent activation functions and coupled with pre-processing techniques, is analyzed and compared. The Min-Max, Z-Score, and Decimal Scaling pre-processing techniques are examined. Simulation results on selected benchmark datasets reveal that pre-processing the data greatly improves ANN convergence, with Z-Score producing the best overall performance on all datasets.
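As an illustration of the three pre-processing techniques compared above, the following is a minimal NumPy sketch; the feature matrix `X` and the [0, 1] target range for Min-Max are illustrative assumptions, not values from the study.

```python
import numpy as np

def min_max(x, new_min=0.0, new_max=1.0):
    """Min-Max: rescale each feature (column) to [new_min, new_max]."""
    x_min, x_max = x.min(axis=0), x.max(axis=0)
    return (x - x_min) / (x_max - x_min) * (new_max - new_min) + new_min

def z_score(x):
    """Z-Score: standardize each feature to zero mean and unit std."""
    return (x - x.mean(axis=0)) / x.std(axis=0)

def decimal_scaling(x):
    """Decimal Scaling: divide by 10**j so every |value| falls below 1.
    Assumes no all-zero columns."""
    j = np.floor(np.log10(np.abs(x).max(axis=0))) + 1
    return x / 10.0 ** j

# Illustrative feature matrix (3 patterns, 2 features)
X = np.array([[200.0, 0.5],
              [300.0, 1.5],
              [250.0, 1.0]])
for fn in (min_max, z_score, decimal_scaling):
    print(fn.__name__, fn(X), sep="\n")
```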
Back-Propagation (BP) is a commonly used algorithm for training multilayer feed-forward artificial neural networks. However, BP is inherently slow to learn and sometimes gets trapped at local minima. These problems occur mainly because of a constant, non-optimal learning rate (a fixed step size), whose value is set before training begins and then held fixed for all patterns from the input layer to the output layer. This fixed learning rate often leads the BP network towards failure during steepest descent. To overcome these limitations, this paper introduces an improvement to back-propagation gradient descent with adaptive learning rate (BPGD-AL) that changes the learning rate locally during the learning process. Simulation results on selected benchmark datasets show that the adaptive learning rate significantly improves the learning efficiency of the back-propagation algorithm.
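The abstract does not give the exact BPGD-AL update rule, but a common way to adapt the learning rate locally is to grow it when a step lowers the error and shrink it (rejecting the step) when the error rises. The sketch below implements that heuristic on a toy quadratic; the factors `inc`, `dec`, and `max_ratio` are hypothetical choices, not the paper's values.

```python
import numpy as np

def adaptive_gd(grad_fn, loss_fn, w, lr=0.1, inc=1.05, dec=0.7,
                max_ratio=1.04, epochs=100):
    """Gradient descent with an adaptive learning rate.

    If a step lowers the loss, the rate grows by `inc`; if the loss
    rises by more than `max_ratio`, the step is discarded and the
    rate shrinks by `dec`. (Illustrative heuristic only.)
    """
    loss = loss_fn(w)
    for _ in range(epochs):
        w_new = w - lr * grad_fn(w)
        new_loss = loss_fn(w_new)
        if new_loss < loss:
            w, loss, lr = w_new, new_loss, lr * inc
        elif new_loss > loss * max_ratio:
            lr *= dec            # reject the step, retry with a smaller rate
        else:
            w, loss = w_new, new_loss
    return w, loss

# Toy quadratic with its minimum at w = [3, -1]
target = np.array([3.0, -1.0])
loss_fn = lambda w: np.sum((w - target) ** 2)
grad_fn = lambda w: 2.0 * (w - target)
w_opt, final_loss = adaptive_gd(grad_fn, loss_fn, np.zeros(2))
print(w_opt, final_loss)
```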
This paper presents a new method to keep the back-propagation algorithm from getting stuck in local minima and to address the slow convergence caused by neuron saturation in the hidden layer. In the proposed algorithm, each training pattern has its own activation functions for the neurons in the hidden layer; these are adjusted by adapting gain parameters together with adaptive momentum and learning-rate values during the learning process. The efficiency of the proposed algorithm is compared with conventional back-propagation gradient descent and with the current back-propagation gradient descent with adaptive gain, by means of simulation on three benchmark problems, namely iris, glass, and thyroid.
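To make the gain-adaptation idea concrete, here is a single-pattern sketch of one BP step in which each hidden neuron's sigmoid has a trainable gain (slope) updated alongside the weights, with momentum on the weight change. All sizes, initializations, and hyperparameters are illustrative assumptions; the paper's exact update rules may differ.

```python
import numpy as np

def sigmoid(net, gain):
    """Logistic activation with a trainable gain (slope) parameter."""
    return 1.0 / (1.0 + np.exp(-gain * net))

# One hidden layer, one training pattern; names W1, W2, c, eta, alpha
# are illustrative, not from the paper.
rng = np.random.default_rng(0)
x = rng.normal(size=4)                    # one training pattern (4 inputs)
t = 1.0                                   # its target
W1 = rng.normal(scale=0.5, size=(3, 4))   # input -> hidden weights
W2 = rng.normal(scale=0.5, size=3)        # hidden -> output weights
c = np.ones(3)                            # per-neuron gain in the hidden layer
eta, alpha = 0.3, 0.9                     # learning rate and momentum
dW1_prev = np.zeros_like(W1)

for _ in range(50):
    net_h = W1 @ x
    h = sigmoid(net_h, c)                 # hidden activations
    y = sigmoid(W2 @ h, 1.0)              # output (unit gain)
    err = t - y
    delta_out = err * y * (1 - y)
    delta_h = delta_out * W2 * c * h * (1 - h)        # gain scales the slope
    c += eta * delta_out * W2 * net_h * h * (1 - h)   # adapt the gain
    dW1 = eta * np.outer(delta_h, x) + alpha * dW1_prev
    W1 += dW1
    W2 += eta * delta_out * h
    dW1_prev = dW1

print("final error:", float(err))
```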
This paper presents the application of a combined approach of Higher Order Neural Networks and Recurrent Neural Networks, the so-called Jordan Pi-Sigma Neural Network (JPSN), to temperature forecasting. In the present study, one-step-ahead forecasts of daily temperature are made using five years of historical temperature measurements. We also examine the effects of network parameters, namely the learning factors, the higher-order terms, and the number of neurons in the input layer, on selecting the best network architecture, using several performance measures. The comparison results show that the JPSN model provides an excellent fit and reasonable forecasts, and can therefore be used as a temperature-forecasting tool.
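For readers unfamiliar with the architecture, the following sketch shows a plausible JPSN forward pass: sigma (summing) units take a window of past values plus the previous output as Jordan feedback, and a pi (product) unit multiplies their outputs before a sigmoid. The lag length, number of summing units, and toy series are assumptions for illustration, not the study's configuration.

```python
import numpy as np

def jpsn_forecast(series, weights, biases):
    """One-step-ahead forecasts from a Jordan Pi-Sigma network (sketch).

    Each summing unit computes a linear term over [lagged inputs,
    previous output]; the output unit multiplies the summing units'
    results and applies a sigmoid. Weights are assumed pre-trained;
    shapes and the feedback scheme here are illustrative.
    """
    n_inputs = weights.shape[1] - 1        # last column takes the feedback
    y_prev = 0.0                           # Jordan context starts at zero
    forecasts = []
    for t in range(n_inputs, len(series)):
        x = np.append(series[t - n_inputs:t], y_prev)
        sums = weights @ x + biases            # sigma (summing) units
        net = np.prod(sums)                    # pi (product) unit
        y_prev = 1.0 / (1.0 + np.exp(-net))    # sigmoid output
        forecasts.append(y_prev)
    return np.array(forecasts)

# Toy usage on a scaled daily-temperature-like series
series = (20 + 5 * np.sin(np.linspace(0, 6, 60))) / 40   # scaled into (0, 1)
rng = np.random.default_rng(1)
W = rng.normal(scale=0.1, size=(2, 5))   # 2 summing units, 4 lags + feedback
b = rng.normal(scale=0.1, size=2)
print(jpsn_forecast(series, W, b)[:5])
```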