2019
DOI: 10.1007/978-981-13-8676-3_44

Effective Software Fault Localization Using a Back Propagation Neural Network

Cited by 9 publications (3 citation statements)
References 24 publications
“…Figure 2 have much effect on increasing network accuracy. Increasing the number of neurons in the middle layer raises the accuracy of the neural network, but increasing it excessively lowers the accuracy, because a larger network does not always perform better [26]. MATLAB has been used to implement DL.…”
Section: The Study Area and Results
confidence: 99%
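To make the cited observation concrete, here is a minimal sketch (not from the paper) that sweeps the width of a single hidden layer on a synthetic classification task. The dataset, layer sizes, and hyperparameters are illustrative assumptions; the typical pattern is validation accuracy rising with width and then flattening or dropping once the layer is oversized.

```python
# Hypothetical sketch: vary the hidden-layer width of a small
# back-propagation network and observe validation accuracy.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# Synthetic stand-in data; the cited study's data is not available here.
X, y = make_classification(n_samples=600, n_features=20, n_informative=8,
                           random_state=0)
X_tr, X_val, y_tr, y_val = train_test_split(X, y, test_size=0.3,
                                            random_state=0)

for hidden in (2, 8, 32, 128, 512):
    net = MLPClassifier(hidden_layer_sizes=(hidden,), max_iter=2000,
                        random_state=0)
    net.fit(X_tr, y_tr)
    print(f"hidden={hidden:4d}  val accuracy={net.score(X_val, y_val):.3f}")
```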
“…Maru et al. [25] utilized a back-propagation neural network in which the actual number of times each statement is executed is used to train the network, and obtained a 35% increase in effectiveness over the existing BPNN. Zakari et al. [26] reviewed existing research on multiple fault localization (MFL) in software fault localization.…”
Section: Literature Review
confidence: 99%
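As a rough illustration of the approach Maru et al. are credited with, the sketch below trains a small back-propagation network on per-statement execution counts against pass/fail labels, then scores each statement with a one-hot "virtual test". The coverage matrix, labels, and network settings are all made-up assumptions, not data or code from the cited work.

```python
# Hypothetical sketch of BPNN-based fault localization using execution
# counts: train on (counts -> pass/fail), then rank statements by the
# network's output on one-hot virtual tests.
import numpy as np
from sklearn.neural_network import MLPRegressor

# Rows: test cases; columns: statements. Entries are execution counts
# (a made-up example), following the cited variant rather than binary coverage.
counts = np.array([[3, 1, 0, 2],
                   [1, 0, 4, 1],
                   [0, 2, 2, 0],
                   [2, 3, 1, 5]], dtype=float)
failed = np.array([1, 0, 0, 1], dtype=float)  # 1 = test failed

net = MLPRegressor(hidden_layer_sizes=(4,), max_iter=5000, random_state=0)
net.fit(counts, failed)

# Virtual tests: each exercises exactly one statement. The network's
# output is treated as that statement's suspiciousness score.
suspiciousness = net.predict(np.eye(counts.shape[1]))
ranking = np.argsort(-suspiciousness)
print("statements ranked by suspiciousness:", ranking)
```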
“…One possible solution for more accurate pose estimation is introducing deep learning models into RANSAC. However, the arg max selection function is non-differentiable, which means the gradient of the objective function cannot be back-propagated through the network [44] during training. So, a softmax function is utilized to make the hypothesis selection differentiable.…”
Section: Log-SLAM System
confidence: 99%
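A small sketch of the relaxation described above, with toy hypothesis scores and losses rather than the cited system: hard arg max selection leaves no gradient path back to the scores, while a softmax-weighted expected loss is differentiable end to end.

```python
# Toy example: argmax blocks gradients; softmax-weighted selection does not.
import torch

scores = torch.tensor([0.2, 1.5, 0.7], requires_grad=True)  # hypothesis scores
losses = torch.tensor([0.9, 0.1, 0.4])  # error of each hypothesis (toy values)

# Hard selection: indexing by argmax detaches the result from `scores`,
# so the computation graph contains no path back to them.
hard_loss = losses[scores.argmax()]
assert not hard_loss.requires_grad  # gradient cannot reach `scores`

# Soft selection: expected loss under a softmax distribution over
# hypotheses is differentiable with respect to the scores.
weights = torch.softmax(scores, dim=0)
soft_loss = (weights * losses).sum()
soft_loss.backward()
print("gradient through soft selection:", scores.grad)
```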