2021
DOI: 10.1007/978-3-030-75075-6_10
An Empirical Study on Predictability of Software Code Smell Using Deep Learning Models

Cited by 15 publications (9 citation statements)
References 12 publications
“…As shown in Figure 4 and Figure 5, SMOTE does achieve significant improvement over the None technique on Data Class, God Class, and Long Method across our data sets, and obtains non-significant improvement on Feature Envy. Therefore, researchers and practitioners may still consider using SMOTE as a preprocessing method in line with previous studies Akhter et al (2021); Alkharabsheh et al (2021); Gupta et al (2021); Jain and Saha (2021); Stefano et al (2021); Khleel and Nehéz (2022); Kovačević et al (2022); Nanda and Chhabra (2022); Yedida and Menzies (2022), but should also consider exploring other techniques that may be more effective. Our results in Section 5.3 demonstrate that SMOTE does not consistently achieve the best performance on all four data sets, and the top-performing data resampling technique outperforms SMOTE by 2.63%-17.73% in terms of MCC.…”
Section: Discussion (mentioning)
confidence: 67%
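The SMOTE preprocessing step discussed in the statement above (oversampling the rare "smelly" class before training) can be sketched in a few lines. This is a minimal illustrative implementation of SMOTE's core interpolation idea only — in practice the cited studies would use a full library implementation such as imbalanced-learn's `SMOTE`; the function name and parameters here are chosen for illustration:

```python
import numpy as np

def smote_oversample(X_min, n_new, k=5, rng=None):
    """Generate n_new synthetic minority-class samples by interpolating
    between a random minority sample and one of its k nearest minority
    neighbours (the core idea of SMOTE; simplified sketch, not the full
    algorithm)."""
    rng = np.random.default_rng(rng)
    synthetic = []
    for _ in range(n_new):
        i = rng.integers(len(X_min))
        # Euclidean distances from sample i to every minority sample
        d = np.linalg.norm(X_min - X_min[i], axis=1)
        neighbours = np.argsort(d)[1:k + 1]  # skip the sample itself
        j = rng.choice(neighbours)
        gap = rng.random()  # interpolation factor in [0, 1)
        synthetic.append(X_min[i] + gap * (X_min[j] - X_min[i]))
    return np.vstack(synthetic)
```

Because each synthetic sample lies on a segment between two real minority samples, the oversampled training set stays inside the minority region of the metric feature space rather than simply duplicating rows.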
“…Table 1 lists the summary of the literature review, where the first column presents the first author and the publication year, the second the types of code smells, the third the machine learning algorithms used, the fourth the code smell features, and the last the imbalanced learning methods used in each study. Some researchers Akhter et al (2021); Alkharabsheh et al (2021); Gupta et al (2021); Jain and Saha (2021); Stefano et al (2021); Khleel and Nehéz (2022); Kovačević et al (2022); Nanda and Chhabra (2022); Yedida and Menzies (2022) applied SMOTE as a data preprocessing method to alleviate the class imbalance problem and then utilized or proposed more advanced algorithms for CSD. For example, Akhter et al (2021) used four machine learning classifiers (i.e., NB, RF, DT, and SVM) to investigate the effect of machine learning techniques on CSD.…”
Section: Imbalanced Learning for CSD (mentioning)
confidence: 99%
“…Table 1 lists the summary of the literature review, where the first column presents the references of the studies, the second the types of code smells, the third the machine learning algorithms used, the fourth the code smell features, and the last the imbalanced learning methods used in each study. Some researchers [50][51][52][53][55][56][57][58][59][60][61] applied SMOTE as a data preprocessing technique to alleviate the class imbalance problem, and then utilized or proposed more advanced algorithms for CSD. For example, Akhter et al. 50 used four machine learning classifiers (i.e., NB, RF, DT, and SVM) to investigate the effect of machine learning techniques on CSD.…”
Section: Imbalanced Learning for CSD (mentioning)
confidence: 99%
“…Gupta et al. [39] recommended prediction of code smells using feature extraction from source code for eight types of code smells. They present the application of a data sampling technique to handle the class imbalance problem and use a feature selection technique to find the most relevant feature sets.…”
Section: Related Work (mentioning)
confidence: 99%