2016 2nd International Conference on Contemporary Computing and Informatics (IC3I)
DOI: 10.1109/ic3i.2016.7917934
Comparison of back propagation training algorithms for software defect prediction

Cited by 15 publications (14 citation statements)
References 22 publications
“…BRM approach works similar to the Levenberg-Marquardt optimization, in a sense that it minimises the squared errors and the weights and finds the optimal combination so that the network performs well. Bayesian based NN training is more robust than the standard back propagation nets [52]. When the parameters (weights and biases) increase, the network loses its ability to generalize.…”
Section: Bayesian Regularization Backpropagation Algorithm
confidence: 99%
“…Hence, the results proved that Bayesian Regularization gave better results as compared with the other two approaches. This finding is similar to [28,50] but different from [33]. GSD faces heterogeneous work environments that results in multiple challenges which have been highlighted in this research study.…”
Section: Discussion
confidence: 54%
“…BR is also known as one of the widely used learning techniques in BP processes. The optimization process in a BR learning technique is similar to the learning technique of the LM method as it tunes the weights, minimizes errors, and finally extracts the best combination so that the ANN can perform better [37,38]. The SCG technique uses a mechanism called step-size scaling that decreases the time used inline searching in each learning iteration [39].…”
Section: Methods
confidence: 99%