Proceedings of ICICS, 1997 International Conference on Information, Communications and Signal Processing. Theme: Trends in Info
DOI: 10.1109/icics.1997.652068

A stochastic backpropagation algorithm for training neural networks

Cited by 7 publications (5 citation statements)
References 5 publications
“…The standard approach to feature generation G(n) for a specific iteration (i) is based on a series of virtual binary image generations. G is calculated as the sum of the output quantities (Yi) of the activated neurons in the given iteration step [19,20]:…”
Section: B. PCNN and Feature Generation
confidence: 99%
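The feature-generation rule quoted above, summing the binary outputs of all neurons that fire at a given iteration, can be sketched as follows. This is a minimal illustration; the function name and the toy arrays are assumptions, not taken from the cited paper.

```python
import numpy as np

def pcnn_feature_series(binary_outputs):
    """Feature series G: one scalar per PCNN iteration.

    binary_outputs: list of 2-D binary arrays Y_i, one per iteration,
    where Y_i[x, y] == 1 if the neuron at (x, y) fired at iteration i.
    G(i) is simply the count of activated neurons at that iteration.
    """
    return [int(np.sum(Y)) for Y in binary_outputs]

# Toy example: three iterations of a 2x2 PCNN's binary output
Y_series = [np.array([[1, 0], [0, 1]]),
            np.array([[1, 1], [0, 0]]),
            np.array([[0, 0], [0, 0]])]
G = pcnn_feature_series(Y_series)  # [2, 2, 0]
```

The resulting sequence G serves as a compact, iteration-indexed signature of the input image.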
“…The McCulloch-Pitts model initiated the use of weighted summing units as the basic neuron model [20]. It is commonly acknowledged that this neuron model bears only a loose resemblance to the behavior of real biological neurons.…”
Section: Multi-stage Multiplicative Neural Network (MMNN)
confidence: 99%
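The weighted-summing unit referred to here is the classic threshold neuron: it fires if and only if the weighted sum of its inputs reaches a threshold. A minimal sketch (the function name is an assumption for illustration):

```python
def mcculloch_pitts(inputs, weights, threshold):
    """Classic weighted-sum-and-threshold unit: outputs 1 iff the
    weighted sum of the binary inputs reaches the threshold."""
    s = sum(w * x for w, x in zip(weights, inputs))
    return 1 if s >= threshold else 0

# With unit weights and threshold 2, the unit realises a 2-input AND gate
assert mcculloch_pitts([1, 1], [1, 1], 2) == 1
assert mcculloch_pitts([1, 0], [1, 1], 2) == 0
```

Replacing the hard threshold with a differentiable activation (e.g. a sigmoid) is what makes such units trainable by backpropagation.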
“…To evaluate the performance of the proposed RFBP learning algorithm, classification of the non-convex two-dimensional (NC2) problem [16] is simulated. Three learning methods are compared: conventional BP learning with constant parameters, stochastic BP learning, and RFBP learning.…”
Section: Simulations
confidence: 99%
“…In [16], Chen et al. propose a stochastic BP learning algorithm for training neural networks. The cost function takes the following form:…”
Section: RFBP Learning Algorithm
confidence: 99%
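The specific cost function of [16] is not reproduced in this excerpt. As a generic illustration of the family of methods discussed, the following is a minimal sketch of stochastic (per-sample, randomly ordered) backpropagation for a one-hidden-layer sigmoid network minimising squared error; all names, the architecture, and the hyperparameters are assumptions, not the cited algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_stochastic_bp(X, T, n_hidden=4, lr=0.5, epochs=2000):
    """Online backprop: weights are updated after each sample,
    with samples visited in a fresh random order every epoch."""
    n_in, n_out = X.shape[1], T.shape[1]
    W1 = rng.normal(scale=0.5, size=(n_in, n_hidden))
    W2 = rng.normal(scale=0.5, size=(n_hidden, n_out))
    for _ in range(epochs):
        for i in rng.permutation(len(X)):   # stochastic sample order
            h = sigmoid(X[i] @ W1)          # hidden activations
            y = sigmoid(h @ W2)             # network output
            # squared-error deltas, propagated backwards
            d_out = (y - T[i]) * y * (1 - y)
            d_hid = (d_out @ W2.T) * h * (1 - h)
            W2 -= lr * np.outer(h, d_out)
            W1 -= lr * np.outer(X[i], d_hid)
    return W1, W2

# XOR: a small non-linearly-separable sanity check
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], float)
T = np.array([[0], [1], [1], [0]], float)
W1, W2 = train_stochastic_bp(X, T)
pred = sigmoid(sigmoid(X @ W1) @ W2)
```

The per-sample updates inject noise into the descent direction, which is what distinguishes stochastic BP from batch BP with constant parameters.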
“…Drago et al. have proposed an adaptive-momentum BP for fast minimum search [5]. Chen et al. propose a randomized BP algorithm obtained by choosing a sequence of weighting vectors over the learning phase [6]. A generalized BP algorithm is proposed in [7] that changes the derivative of the activation function so as to magnify the backward-propagated error signal, thus accelerating convergence and helping the search escape local minima.…”
Section: Introduction
confidence: 99%