In this paper, we created a dynamic training-rate function for the back-propagation learning algorithm to avoid local minima and to speed up training. The back-propagation with dynamic training rate (BPDR) algorithm uses the sigmoid activation function. The two-dimensional XOR problem and the iris dataset were used as benchmarks to test the effects of the dynamic training rate formulated in this paper. The results of these experiments demonstrate that the BPDR algorithm is advantageous with regard to both generalization performance and training speed. The stopping criterion (error limit) was set to 1.0.
The main problem of the batch back-propagation (BBP) algorithm is slow training; several parameters need to be adjusted manually, and the algorithm also suffers from training saturation. The learning rate and momentum factor are significant parameters for increasing the efficiency of the BBP algorithm. In this study, we created a new dynamic function for each of the learning rate and momentum factor, and we present the DBBPLM algorithm, which trains with a dynamic function for each of these parameters. A sigmoid function is used as the activation function. The XOR problem and the balance, breast cancer, and iris datasets were used as benchmarks for testing the effects of the dynamic DBBPLM algorithm. All experiments were performed in Matlab 2012a, with the stopping error set to 10^-5. The experimental results show that the DBBPLM algorithm provides superior training performance: faster training with higher accuracy than the BBP algorithm and existing works.
Abstract-The main problem of the batch back-propagation (BBP) algorithm is slow training; several parameters, such as the learning rate, need to be adjusted manually. In addition, the BBP algorithm suffers from training saturation. The objective of this study is to speed up the training of the BBP algorithm and to remove training saturation. The training rate is the most significant parameter for increasing the efficiency of the BBP algorithm. In this study, a new dynamic training rate is created to speed up the training of the BBP algorithm. The dynamic batch back-propagation (DBBPLR) algorithm is presented, which trains with a dynamic training rate. This technique was implemented with a sigmoid function. Several datasets were used as benchmarks for testing the effects of the created dynamic training rate. All experiments were performed in Matlab. The experimental results show that the DBBPLR algorithm provides superior training performance: faster training with higher accuracy than the BBP algorithm and existing works.
Keywords-artificial neural network (ANN), batch back-propagation algorithm, dynamic training rate, training speed-up, training accuracy.
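The abstracts above describe batch back-propagation with a dynamic training rate, benchmarked on the XOR problem, but do not reproduce the exact rate formula. The sketch below is a minimal illustration of the general idea only: a sigmoid network trained on XOR with batch gradient updates, where the learning rate is recomputed each epoch. The decay schedule `eta0 / (1 + decay * epoch)` is an assumed stand-in, not the formula from these papers.

```python
# Illustrative sketch of batch back-propagation on the 2-input XOR problem
# with a dynamic (per-epoch) training rate. The rate schedule used here,
# eta = eta0 / (1 + decay * epoch), is a hypothetical example; the papers'
# actual dynamic-rate functions are not given in the abstracts.
import math
import random

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train_xor(epochs=3000, eta0=2.0, decay=1e-3, hidden=3, seed=1):
    """Train a 2-hidden-3-output-1 sigmoid net on XOR; return per-epoch SSE."""
    random.seed(seed)
    X = [(0, 0), (0, 1), (1, 0), (1, 1)]
    y = [0, 1, 1, 0]
    W1 = [[random.uniform(-1, 1) for _ in range(2)] for _ in range(hidden)]
    b1 = [0.0] * hidden
    W2 = [random.uniform(-1, 1) for _ in range(hidden)]
    b2 = 0.0
    errors = []
    for epoch in range(epochs):
        eta = eta0 / (1.0 + decay * epoch)  # dynamic training rate (assumed form)
        gW1 = [[0.0] * 2 for _ in range(hidden)]
        gb1 = [0.0] * hidden
        gW2 = [0.0] * hidden
        gb2 = 0.0
        sse = 0.0
        for x, t in zip(X, y):
            # forward pass
            h = [sigmoid(W1[j][0] * x[0] + W1[j][1] * x[1] + b1[j])
                 for j in range(hidden)]
            o = sigmoid(sum(W2[j] * h[j] for j in range(hidden)) + b2)
            sse += 0.5 * (t - o) ** 2
            # backward pass: accumulate batch gradients
            d_o = (o - t) * o * (1 - o)
            gb2 += d_o
            for j in range(hidden):
                d_h = d_o * W2[j] * h[j] * (1 - h[j])
                gW2[j] += d_o * h[j]
                gb1[j] += d_h
                gW1[j][0] += d_h * x[0]
                gW1[j][1] += d_h * x[1]
        # batch update: apply accumulated gradients once per epoch
        b2 -= eta * gb2
        for j in range(hidden):
            W2[j] -= eta * gW2[j]
            b1[j] -= eta * gb1[j]
            W1[j][0] -= eta * gW1[j][0]
            W1[j][1] -= eta * gW1[j][1]
        errors.append(sse)
    return errors

errors = train_xor()
```

Because the rate shrinks as training proceeds, early epochs take large steps out of flat regions while later epochs take small steps near a minimum; this is one common way a "dynamic" rate is realized, though the papers' own functions may differ.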