2011
DOI: 10.5121/ijsc.2011.2204

Determination of Over-Learning and Over-Fitting Problem in Back Propagation Neural Network

Abstract: A drawback of the error back-propagation algorithm for a multilayer feed-forward neural network is over-learning, or over-fitting. We discuss this problem and obtain necessary and sufficient conditions for the over-learning problem to arise. Using those conditions and the concept of a reproducing, this paper proposes methods for choosing a training set that prevent over-learning. For a classifier, besides classification capability, its size is another fundamental aspect. In pursui…
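The over-learning the abstract describes is commonly detected by monitoring error on a held-out set. The sketch below only illustrates that general idea, not the training-set selection method the paper proposes; the network size, the synthetic data, and the patience threshold are all assumptions.

```python
# Minimal sketch: detecting over-learning with a held-out validation set.
# Architecture, data, and patience are illustrative assumptions, not the
# method proposed in the paper.
import numpy as np

rng = np.random.default_rng(0)

# Noisy 1-D regression task: y = sin(x) + noise.
X = rng.uniform(-3, 3, size=(200, 1))
y = np.sin(X) + 0.1 * rng.normal(size=X.shape)
X_tr, y_tr, X_va, y_va = X[:150], y[:150], X[150:], y[150:]

# One-hidden-layer network trained by plain back-propagation.
H = 50
W1 = rng.normal(scale=0.5, size=(1, H)); b1 = np.zeros(H)
W2 = rng.normal(scale=0.5, size=(H, 1)); b2 = np.zeros(1)

def forward(X):
    h = np.tanh(X @ W1 + b1)
    return h, h @ W2 + b2

lr, patience, best, wait = 0.05, 20, np.inf, 0
for epoch in range(5000):
    h, out = forward(X_tr)
    err = out - y_tr                        # dE/d(out) for squared error
    gW2 = h.T @ err / len(X_tr)             # back-propagate through layer 2
    gb2 = err.mean(0)
    dh = (err @ W2.T) * (1 - h**2)          # tanh derivative, layer 1
    gW1 = X_tr.T @ dh / len(X_tr)
    gb1 = dh.mean(0)
    W1 -= lr * gW1; b1 -= lr * gb1
    W2 -= lr * gW2; b2 -= lr * gb2

    va_err = np.mean((forward(X_va)[1] - y_va) ** 2)
    if va_err < best - 1e-6:
        best, wait = va_err, 0
    else:
        wait += 1
        if wait >= patience:                # validation error stopped improving
            print(f"over-learning onset near epoch {epoch}")
            break
```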


Cited by 58 publications (23 citation statements). References 8 publications.
“…The first simple observation confirms that the steady state gain is nonlinear and depends on the fuel-flow operating range. The second more complicated observation is related to the over-learning phenomenon which is commonly found on auto-constructing identification methods such as auto-constructing neural networks [10,11].…”
Section: ( )] (mentioning)
confidence: 99%
“…As is well known, each task requires a different amount of data to train an ANN. It is difficult to determine in advance how much training data is needed: too little data makes the model inaccurate, while excessive training, i.e., over-learning, makes the ANN memorize the data rather than learn, introducing noise that degrades the accuracy of the result [21]. Nevertheless, there are statistical heuristics for determining a suitable sample size; these take the form of special scaling factors [22], e.g., increasing the sample by a fixed constant or percentage until the desired confidence level is reached.…”
Section: The problem of the amount of training data (unclassified)
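A minimal sketch of the scaling-factor heuristic described in that statement: grow the training set by a fixed percentage until a held-out accuracy target is met. The names (grow_training_set, evaluate), the growth rate, and the target are hypothetical, not taken from [22].

```python
# Hypothetical sketch of the sample-size scaling heuristic: enlarge the
# training set by a fixed percentage until a validation target is reached.
def grow_training_set(pool, evaluate, start=100, growth=0.25, target=0.95):
    """Return a training-set size at which evaluate(size) reaches target.

    pool     -- total number of labelled examples available
    evaluate -- callable: size -> validation accuracy at that size (assumed)
    """
    size = start
    while size < pool:
        if evaluate(size) >= target:
            return size
        size = int(size * (1 + growth))   # scale up by a fixed percentage
    return pool                           # fall back to everything we have
```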
“…(2) A binary classifier is usually used as the face recognizer; it is trained to decide whether two photos belong to the same person. Because only one photo, or a limited number of photos, per person is available to train the recognizer, and because of the over-fitting problem [13] in machine learning, a binary classifier usually achieves a better result than a multiclass classifier. Consequently, heavily unbalanced results ensue, i.e., many more false results occur on positive samples than on negative ones [14].…”
Section: Facial Recognition (mentioning)
confidence: 99%
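The pairing step behind such a binary ("same person?") recognizer can be sketched as follows; the feature representation and data layout are assumptions, and the sketch also shows why negative pairs vastly outnumber positive ones, which produces the unbalanced errors noted above.

```python
# Sketch of the pairing step for a binary ("same person?") recognizer.
# The feature vectors and data layout are illustrative assumptions.
import itertools
import numpy as np

def make_pairs(photos_by_person):
    """Build (feature_diff, label) pairs: 1 = same person, 0 = different.

    photos_by_person -- dict: person_id -> list of feature vectors
    Even with one or two photos per person, cross-person combinations
    yield far more negative pairs than positive ones.
    """
    X, y = [], []
    people = list(photos_by_person)
    for p in people:                                  # positive pairs
        for a, b in itertools.combinations(photos_by_person[p], 2):
            X.append(np.abs(a - b)); y.append(1)
    for p, q in itertools.combinations(people, 2):    # negative pairs
        for a in photos_by_person[p]:
            for b in photos_by_person[q]:
                X.append(np.abs(a - b)); y.append(0)
    return np.array(X), np.array(y)
```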
“…When we give a robot a frontal face photo to learn the features of a new person, we look for a mixture matrix A, as in Equation (13), to combine the pre-trained classifiers and obtain a higher recognition rate. Equation (14) is a combination of M classifiers, where (•) denotes the Hadamard product (entrywise product):…”
Section: Classifier Combinations (mentioning)
confidence: 99%
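One plausible reading of that combination, sketched in code: per-class scores from M pre-trained classifiers are weighted entrywise (Hadamard product) by the mixture matrix A and summed. The shapes and the uniform A in the usage example are assumptions, not the cited paper's Equation (14) verbatim.

```python
# Illustrative combination of M classifiers via a mixture matrix A and
# the Hadamard (entrywise) product; shapes are assumptions.
import numpy as np

def combine(scores, A):
    """scores -- (M, C) array: per-class scores from M classifiers
    A      -- (M, C) mixture matrix learned for the new person
    Returns the (C,) combined score vector: sum over m of (A ⊙ scores).
    """
    return (A * scores).sum(axis=0)   # '*' is the entrywise product

# Usage: three classifiers, four classes, uniform mixture weights.
scores = np.array([[0.2, 0.5, 0.1, 0.2],
                   [0.1, 0.6, 0.2, 0.1],
                   [0.3, 0.4, 0.2, 0.1]])
A = np.full_like(scores, 1 / 3)
print(combine(scores, A).argmax())    # index of the predicted class
```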