2017 International Conference on Advances in Computing, Communications and Informatics (ICACCI), 2017
DOI: 10.1109/icacci.2017.8126033

Evaluation of sampling techniques in software fault prediction using metrics and code smells

Cited by 8 publications (8 citation statements)
References 13 publications
“…Kendall rank correlation τ: Kendall rank correlation 27 is a nonparametric test that measures the strength of dependence between two variables. As shown in Equation (6), we consider two samples, a and b, each of size n, so the total number of pairings of a with b is n(n − 1)/2.…”
Section: Multinomial Logistic Regression
confidence: 99%
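As context for this statement, a minimal sketch (not from the cited paper) of Kendall's τ computed directly from concordant and discordant pairs over the n(n − 1)/2 pairings, cross-checked against SciPy; the sample data are illustrative only:

```python
# Sketch: Kendall's tau as (concordant - discordant) / (n*(n-1)/2),
# compared with scipy.stats.kendalltau on tie-free data.
from itertools import combinations
from scipy.stats import kendalltau

def kendall_tau(a, b):
    """Tau-a over all n*(n-1)/2 pairs; assumes no ties in either sample."""
    n = len(a)
    concordant = discordant = 0
    for i, j in combinations(range(n), 2):
        s = (a[i] - a[j]) * (b[i] - b[j])
        if s > 0:
            concordant += 1
        elif s < 0:
            discordant += 1
    return (concordant - discordant) / (n * (n - 1) / 2)

a = [1, 2, 3, 4, 5]
b = [3, 1, 4, 2, 5]
print(kendall_tau(a, b))   # 0.4
tau, p = kendalltau(a, b)  # SciPy agrees when there are no ties
print(tau)
```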
“…The main problem with imbalanced classification is that there are too few training examples of the minority class for a model to effectively learn the decision boundary. SMOTE 6 has been applied to increase the number of minority-class examples in the training dataset before fitting a model. SMOTE generates synthetic training examples by linear interpolation: it selects samples in feature space for each target class together with their closest neighbors.…”
Section: The Proposed Learning Model
confidence: 99%
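To illustrate the oversampling step described above, a minimal sketch (not from the cited work) using the SMOTE implementation in the imbalanced-learn library; the dataset, class weights, and k_neighbors value are illustrative assumptions:

```python
# Sketch: oversample the minority class with SMOTE before fitting a classifier.
from collections import Counter
from sklearn.datasets import make_classification
from imblearn.over_sampling import SMOTE

# Synthetic imbalanced dataset: roughly 10% minority class.
X, y = make_classification(n_samples=1000, n_features=10,
                           weights=[0.9, 0.1], random_state=42)
print("before:", Counter(y))

# k_neighbors controls how many nearest minority neighbours are used
# for the linear interpolation that creates each synthetic example.
X_res, y_res = SMOTE(k_neighbors=5, random_state=42).fit_resample(X, y)
print("after: ", Counter(y_res))
```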
“…However, the imbalanced distribution of software faults in source code leads to poor predictive power when machine learning techniques are applied to predict source code defects such as bad smells [ 50 ]. Hassan [ 51 ] proposed using information theory to measure the amount of randomness, or entropy, of the change distribution, quantifying the code complexity that results from code changes.…”
Section: Related Work and Motivation
confidence: 99%
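As a rough illustration of the entropy-of-changes idea mentioned above, a minimal sketch under a simplified reading (Shannon entropy of how changes spread across files in a period); this is not Hassan's exact model:

```python
# Sketch: Shannon entropy of the distribution of changes over files.
import math

def change_entropy(changes_per_file):
    """changes_per_file: list of change counts, one entry per file, in a period."""
    total = sum(changes_per_file)
    probs = [c / total for c in changes_per_file if c > 0]
    return -sum(p * math.log2(p) for p in probs)

# Changes concentrated in one file -> low entropy (less complex period);
# changes spread evenly across files -> higher entropy.
print(change_entropy([10, 0, 0, 0]))  # 0.0
print(change_entropy([3, 3, 2, 2]))   # ~1.97
```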