2012
DOI: 10.47893/ijcct.2012.1105
Predicting Fault-prone Software Module Using Data Mining Technique and Fuzzy Logic

Abstract: This paper discusses a new model for improving the reliability and quality of software systems by predicting fault-prone modules before testing. The model utilizes the classification capability of data mining techniques and the knowledge stored in software metrics to classify a software module as fault-prone or not fault-prone. A decision tree is constructed using the ID3 algorithm on existing project data in order to gain information for deciding whether a particular module is fault-prone or not…
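The ID3-based classification the abstract describes can be sketched as follows. This is a minimal illustration, not the paper's implementation: the metric names (`loc`, `cc`), the discretized values, and the sample modules are hypothetical stand-ins for the software metrics the model would actually use.

```python
import math
from collections import Counter

def entropy(labels):
    # Shannon entropy of a list of class labels.
    total = len(labels)
    return -sum((c / total) * math.log2(c / total)
                for c in Counter(labels).values())

def info_gain(rows, labels, attr):
    # Information gain of splitting on attribute `attr` (ID3's criterion).
    base, total, remainder = entropy(labels), len(rows), 0.0
    for v in set(r[attr] for r in rows):
        subset = [l for r, l in zip(rows, labels) if r[attr] == v]
        remainder += len(subset) / total * entropy(subset)
    return base - remainder

def id3(rows, labels, attrs):
    # Leaf: all modules share one label, or no attributes remain.
    if len(set(labels)) == 1:
        return labels[0]
    if not attrs:
        return Counter(labels).most_common(1)[0][0]
    # Split on the attribute with maximum information gain.
    best = max(attrs, key=lambda a: info_gain(rows, labels, a))
    branches = {}
    for v in set(r[best] for r in rows):
        idx = [i for i, r in enumerate(rows) if r[best] == v]
        branches[v] = id3([rows[i] for i in idx],
                          [labels[i] for i in idx],
                          [a for a in attrs if a != best])
    return (best, branches)

def classify(node, row):
    # Walk the tree; fall back to "not fault-prone" on an unseen value.
    while isinstance(node, tuple):
        attr, branches = node
        node = branches.get(row[attr], "not fault-prone")
    return node

# Hypothetical project data: modules described by discretized metrics
# (loc = lines of code, cc = cyclomatic complexity).
data = [
    {"loc": "high", "cc": "high"}, {"loc": "high", "cc": "low"},
    {"loc": "low",  "cc": "high"}, {"loc": "low",  "cc": "low"},
]
labels = ["fault-prone", "fault-prone", "fault-prone", "not fault-prone"]

tree = id3(data, labels, ["loc", "cc"])
print(classify(tree, {"loc": "low", "cc": "low"}))   # → not fault-prone
print(classify(tree, {"loc": "high", "cc": "high"})) # → fault-prone
```

In this toy dataset, modules with low LOC and low complexity come out not fault-prone, while everything else is flagged; a real deployment would train on historical modules with known fault data.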

Year Published: 2015–2024

Cited by 8 publications (4 citation statements)
References 28 publications
“…The concept of code complexity is not an atomic concept. AK Pandey (2010) (HC) to classify the software module as fault prone or not. Regarding the previous studies (Pandey, 2010;Zhou et al, 2010), LOC, CM and HC are indeed better fault-prone predictors than other complexity metrics.…”
Section: Local Best | mentioning
confidence: 99%
“…AK Pandey (2010) (HC) to classify the software module as fault prone or not. Regarding the previous studies (Pandey, 2010;Zhou et al, 2010), LOC, CM and HC are indeed better fault-prone predictors than other complexity metrics. In the proposed method, the combination of LOC, CM and HC is used to measure the complexity of a program source code; after measuring the complexity weight of each blocks using proposed equations the most fault-prone paths of the program are identified by the proposed FA.…”
Section: Local Best | mentioning
confidence: 99%
“…The metrics captured differ slightly with the choice of programming language, but the most commonly collected ones are methods per class, percent branch statements, LOC, maximum method complexity, classes & interfaces, and percent lines with comments. Notably JHawk, a Java metrics tool [13], has evolved from a stand-alone GUI application to incorporate a command-line variant and an Eclipse plug-in. It offers IDE integration (for VisualAge for Java) and provides HTML, XML, and CSV export formats.…”
Section: Source Code Monitor (SM) | mentioning
confidence: 99%
“…We use BayesNet in our experiment because it is robust to overfitting and does not assume data independence. As a matter of fact, many machine learning techniques such as neural networks [7], [37], [67], decision trees [6], [27], [28], case-based reasoning [36], [38], [55], Naïve Bayes [15], [31], [44], fuzzy logic [56], logistic regression [5], [9], [16], SVM [20], [25], [26], random forests [39], [63], and so on have been used for predicting software fault-proneness in the past. We want to emphasize that the focus of this study is to evaluate prediction effectiveness of the metrics derived from newly designed social networks.…”
mentioning
confidence: 99%