2019 IEEE 43rd Annual Computer Software and Applications Conference (COMPSAC)
DOI: 10.1109/compsac.2019.10229
Software Fault Proneness Prediction with Group Lasso Regression: On Factors that Affect Classification Performance

Cited by 9 publications (20 citation statements); references 36 publications.
“…Many different machine learning algorithms have been used in building software fault-proneness prediction models. These include J48 (Moser et al., 2008; Kamei et al., 2010; Krishnan et al., 2013), Random Forest (RF) (Guo et al., 2004; Mahmood et al., 2018; Fiore et al., 2021; Gong et al., 2021), and combinations of several machine learning algorithms, e.g., OneR, J48, and Naïve Bayes (NB) in (Menzies et al., 2007), RF, NB, RPart, and SVM in (Bowes et al., 2018), J48, RF, NB, Logistic Regression (LR), PART, and G-Lasso in (Goseva-Popstojanova et al., 2019), and Decision Tree (DT), k-Nearest Neighbor (kNN), LR, NB, and RF in (Kabir et al., 2021). With recent advances in Deep Neural Networks (DNN), some software fault-proneness prediction studies used deep learning (Wang et al., 2016; Li et al., 2017; Pang et al., 2017; Zhou et al., 2019; Zhao et al., 2021).…”
Section: Related Work (mentioning; confidence: 99%)
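To make the learner comparison in the excerpt above concrete, the sketch below shows how such a fault-proneness model is typically trained and evaluated on per-module metrics. It is a minimal, illustrative example only: the file name, metric columns, label column, and the choice of Random Forest are assumptions, not the setup of any specific cited study.

```python
# Minimal sketch: train and evaluate one commonly used learner (Random Forest)
# on per-module software metrics. Dataset layout and column names are hypothetical.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Hypothetical dataset: one row per module, static code metrics plus a
# binary label indicating whether the module was fault prone.
data = pd.read_csv("modules.csv")
X = data[["loc", "cyclomatic_complexity", "num_methods", "coupling"]]
y = data["fault_prone"]

# Random Forest is one of the learners that appears across the cited studies.
clf = RandomForestClassifier(n_estimators=100, random_state=0)

# 10-fold cross-validated F1 score, a common evaluation choice in this literature.
scores = cross_val_score(clf, X, y, cv=10, scoring="f1")
print(f"Mean F1 across folds: {scores.mean():.3f}")
```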
“…Static code metrics are collected from the software source code or binary code units (Koru and Liu, 2005; Menzies et al., 2007; Lessmann et al., 2008; Menzies et al., 2010; He et al., 2013; Ghotra et al., 2015; Bowes et al., 2018; Kabir et al., 2021). Change metrics, sometimes called process metrics, are collected from the projects' development history (i.e., commit logs) and bug tracking systems (Nagappan et al., 2010; Giger et al., 2011; Krishnan et al., 2011, 2013; Goseva-Popstojanova et al., 2019). Social metrics are extracted from the communications among developers and/or users of a software project (Bird et al., 2009).…”
Section: Related Work (mentioning; confidence: 99%)
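The G-Lasso mentioned in the first excerpt refers to group lasso regression, which pairs naturally with such metric families: each family (e.g., static code metrics, change metrics, social metrics) can be treated as a group whose coefficients enter or leave the model together. As an illustrative formulation (treating these families as the groups is an assumption, not necessarily the grouping used in the paper), the logistic group lasso estimate is

$$\hat{\beta} = \arg\min_{\beta} \Big[ -\ell(\beta) + \lambda \sum_{g=1}^{G} \sqrt{p_g}\,\lVert \beta_g \rVert_2 \Big],$$

where $\ell(\beta)$ is the logistic log-likelihood, $\beta_g$ collects the coefficients of metric group $g$, $p_g$ is the size of group $g$, and $\lambda$ controls how many groups are retained.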