1996
DOI: 10.1109/72.508935

A theoretical comparison of batch-mode, on-line, cyclic, and almost-cyclic learning

Abstract: We study and compare different neural network learning strategies: batch-mode learning, on-line learning, cyclic learning, and almost-cyclic learning. Incremental learning strategies require less storage capacity than batch-mode learning. However, due to the arbitrariness in the presentation order of the training patterns, incremental learning is a stochastic process, whereas batch-mode learning is deterministic. In zeroth order, i.e., as the learning parameter η tends to zero, all learning strategies approxi…
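Since the abstract contrasts four presentation schedules, a minimal sketch may help fix the definitions. This is an illustrative implementation under assumed notation, not code from the paper: plain gradient descent on a hypothetical per-pattern squared error for a linear unit (the grad helper and all function names are invented for illustration).

    import numpy as np

    def grad(w, x, target):
        # Hypothetical per-pattern gradient dE^mu/dw for a linear unit
        # with squared error E^mu = 0.5 * (w.x - target)^2.
        return (w @ x - target) * x

    def batch_step(w, patterns, eta):
        # Batch mode: accumulate the gradient over the whole training set,
        # then apply a single deterministic, synchronous weight update.
        g = sum(grad(w, x, t) for x, t in patterns)
        return w - eta * g

    def online_step(w, patterns, eta, rng):
        # On-line: update on one pattern drawn at random, so the weight
        # trajectory is a stochastic process.
        x, t = patterns[rng.integers(len(patterns))]
        return w - eta * grad(w, x, t)

    def cyclic_sweep(w, patterns, eta):
        # Cyclic: one update per pattern, in the same fixed order every sweep.
        for x, t in patterns:
            w = w - eta * grad(w, x, t)
        return w

    def almost_cyclic_sweep(w, patterns, eta, rng):
        # Almost-cyclic: one pass per sweep, but the presentation order is
        # reshuffled before each sweep.
        for i in rng.permutation(len(patterns)):
            x, t = patterns[i]
            w = w - eta * grad(w, x, t)
        return w

In the small-η regime the abstract refers to, all four schedules follow the same zeroth-order dynamics.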

Cited by 60 publications (25 citation statements, 2000–2024). References 15 publications.

“…The idea of averaging over errors and updating the weights on a slower time scale than sample presentation is well known from batch learning methods [20]. In those methods, errors are determined over a whole sweep through the pattern set and subsequently weights are updated synchronously.…”
Section: Σ_{j=i}^{i+J−1} (equation fragment)
confidence: 99%
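The sweep-wise averaging this quote refers to corresponds to the standard batch rule; in assumed notation (P training patterns, per-pattern error E^μ, learning rate η):

    \Delta w = -\eta \sum_{\mu=1}^{P} \nabla_w E^{\mu}(w)

The per-pattern errors are collected over a full sweep and the weights are then updated once, synchronously, so they change on a slower time scale than the pattern presentations.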
“…Batch training enables one to deal more efficiently with large data sets, by dividing the patterns into data batches of fixed length N and by applying algorithms to one batch at a time [15]. In Fig.…”
Section: Batch and Iterated Versions: The BMEKF and IBMEKF Algorithms
confidence: 99%
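A minimal sketch of the batching scheme described in this quote, assuming the pattern set is split into consecutive batches of fixed length N (the grad helper is again a hypothetical linear unit with squared error, not the BMEKF update itself):

    import numpy as np

    def grad(w, x, target):
        # Hypothetical per-pattern gradient (linear unit, squared error).
        return (w @ x - target) * x

    def split_batches(patterns, N):
        # Divide the pattern set into consecutive batches of fixed length N;
        # the last batch may be shorter if len(patterns) % N != 0.
        for i in range(0, len(patterns), N):
            yield patterns[i:i + N]

    def train_one_sweep(w, patterns, N, eta):
        # Apply the algorithm to one batch at a time: accumulate the gradient
        # within a batch, then update the weights once per batch.
        for batch in split_batches(patterns, N):
            g = sum(grad(w, x, t) for x, t in batch)
            w = w - eta * g
        return w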
“…The back-propagation algorithm (BPA) is a common and widely used method for artificial neural networks training [1][2][3][4]. There are two practical ways to implement it: batch learning and online learning.…”
Section: Introduction
confidence: 99%
“…The online mode of BPA is popular due to its simplicity to implement and less likely to be trapped in a local minimum, while the stochastic nature makes it difficult to establish theoretical conditions for convergence of the algorithm [2,3]. In contrast, the use of batch mode of training provides an accurate estimate of the gradient vector, convergence to a local minimum is thereby guaranteed under simple conditions [1][2][3]. However, the success of BPA needs a priori specification of a network structure.…”
Section: Introduction
confidence: 99%
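The trade-off drawn in these two quotes can be made explicit with the two standard update rules, written in assumed notation rather than the citing paper's own. On-line back-propagation updates on a single randomly presented pattern μ_t,

    w_{t+1} = w_t - \eta \nabla_w E^{\mu_t}(w_t),

so each step uses a noisy single-pattern estimate of the gradient and the weight trajectory is stochastic. Batch-mode back-propagation uses the exact gradient of the total error,

    w_{k+1} = w_k - \eta \sum_{\mu=1}^{P} \nabla_w E^{\mu}(w_k),

which makes the iteration deterministic, so convergence to a local minimum can be guaranteed under simple conditions on η.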