2014 International Conference on Information Systems and Computer Networks (ISCON)
DOI: 10.1109/iciscon.2014.6965232

Prediction of software defects using Twin Support Vector Machine

Cited by 20 publications (8 citation statements); references 9 publications.
“…In other words, it can select those features that are most relevant and least redundant with respect to the class label of the dataset. Therefore, introducing FS methods into SDP can solve the high-dimensionality problem [17][18][19]. FS is a vital data pre-processing step in classification, as it improves the quality of the data and consequently the predictive performance of the prediction models.…”
Section: Introduction
confidence: 99%
“…Eight of the 22 available metrics were selected, reducing the feature space by almost 63%. The selected metrics are 7 predictors (1,4,9,11,14,15,17) plus the target attribute 22 (description shown in Appendix A).…”
Section: SVM Defect Prediction Using a Subset of Metrics Reduced by CFS ES
confidence: 99%
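The subset selection described above can be sketched as plain column indexing; the data here is a random stand-in, not the cited NASA-style dataset, and the 0-based column mapping is an assumption:

```python
import numpy as np

# Stand-in data: 100 modules x 22 metrics (NOT the real dataset).
rng = np.random.default_rng(0)
data = rng.normal(size=(100, 22))

# The 7 selected predictors (1,4,9,11,14,15,17), converted to 0-based columns.
predictors = np.array([1, 4, 9, 11, 14, 15, 17]) - 1
X = data[:, predictors]
y = data[:, 21]            # attribute 22 as the target label

# Keeping 8 of 22 metrics drops 14 columns, i.e. ~63% of the feature space.
print(X.shape)             # (100, 7)
```

The reduced matrix `X` would then be fed to the SVM classifier in place of the full 22-metric table.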
“…A support vector machine (SVM) is a well-known classification technique in machine learning. It has been tested on various applications in pattern recognition, including software defect prediction [4][5][6][7][8][9]. Vapnik developed the principle of SVM in 1995 [10].…”
Section: Introduction
confidence: 99%
“…SVM has remarkable advantages, as it employs the structural risk minimization (SRM) principle, which provides better generalization and reduces error in the training phase. Owing to its strong performance even on non-linear classification problems, it has been applied across a diverse spectrum of research fields, ranging from text classification, face recognition, financial applications, brain-computer interfaces, and biomedicine to human action recognition [1,12,54,93,99,100,148,188,193,204,205]. Although SVM has outperformed most other systems, it still has limitations in dealing with complex data, owing to the high computational cost of solving QPPs, and its performance depends heavily on the choice of kernel function and its parameters.…”
Section: Introduction
confidence: 99%
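The twin SVM addresses the QPP cost noted above by fitting two non-parallel planes, one per class, via two smaller problems instead of one large QPP. A minimal NumPy sketch of the least-squares variant (a simplification of the paper's TWSVM: the two problems reduce to linear systems; data and parameters are illustrative assumptions):

```python
import numpy as np

# Synthetic two-class data (stand-in, not real defect data).
rng = np.random.default_rng(1)
A = rng.normal(size=(40, 2))              # class +1 samples, near (0, 0)
B = rng.normal(size=(40, 2)) + [5, 5]     # class -1 samples, near (5, 5)

e1, e2 = np.ones((len(A), 1)), np.ones((len(B), 1))
H, G = np.hstack([A, e1]), np.hstack([B, e2])   # augmented [X, 1] matrices
c1 = c2 = 1.0                                    # trade-off parameters

# Each plane w_k.x + b_k = 0 lies close to its own class and away from the
# other; in the least-squares variant each solve replaces a full QPP.
u1 = -np.linalg.solve(H.T @ H / c1 + G.T @ G, G.T @ e2).ravel()
u2 = np.linalg.solve(G.T @ G / c2 + H.T @ H, H.T @ e1).ravel()

def predict(X):
    # Assign each sample to the class whose plane is geometrically nearer.
    d1 = np.abs(X @ u1[:2] + u1[2]) / np.linalg.norm(u1[:2])
    d2 = np.abs(X @ u2[:2] + u2[2]) / np.linalg.norm(u2[:2])
    return np.where(d1 <= d2, 1, -1)

acc = np.mean(np.r_[predict(A) == 1, predict(B) == -1])
print(acc)
```

On well-separated blobs like these, the nearest-plane rule recovers both classes almost perfectly; the kernelized form would replace `A` and `B` with kernel matrices.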