2018
DOI: 10.1007/s10586-018-1884-x
Feature selection for software effort estimation with localized neighborhood mutual information

Cited by 19 publications (11 citation statements)
References 15 publications
“…They conducted experiments on six datasets and the results showed that CFS ensembles achieved better performance than RReliefF ensembles. Liu et al [30] proposed a greedy feature selection method, called LFS, to guarantee the appropriateness of case-based reasoning for the software effort estimation task. They conducted experiments on six datasets and the results showed that the feature subset selected by LFS produced effective estimates compared with a randomized baseline method.…”
Section: B. Feature Selection in Software Engineering
confidence: 99%
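The excerpt above describes LFS as a greedy, wrapper-style selection method serving case-based reasoning (analogy-based estimation). The original LFS algorithm is not reproduced here; as a rough illustration of that general idea, the sketch below (with hypothetical data and a simple leave-one-out nearest-neighbour scoring function, not the authors' actual criterion) greedily adds the feature that most reduces estimation error:

```python
import math

def loo_abs_error(data, efforts, feats):
    # Leave-one-out scoring: estimate each project's effort as the effort
    # of its nearest neighbour (Euclidean distance over chosen features).
    total = 0.0
    for i in range(len(data)):
        best, best_d = None, float("inf")
        for j in range(len(data)):
            if j == i:
                continue
            d = math.sqrt(sum((data[i][f] - data[j][f]) ** 2 for f in feats))
            if d < best_d:
                best, best_d = j, d
        total += abs(efforts[i] - efforts[best])
    return total / len(data)

def greedy_forward_selection(data, efforts):
    # Greedily add the single feature that most reduces leave-one-out
    # error; stop as soon as no addition improves the score.
    remaining = set(range(len(data[0])))
    chosen, best_err = [], float("inf")
    while remaining:
        cand = min(remaining,
                   key=lambda f: loo_abs_error(data, efforts, chosen + [f]))
        err = loo_abs_error(data, efforts, chosen + [cand])
        if err >= best_err:
            break
        chosen.append(cand)
        remaining.discard(cand)
        best_err = err
    return chosen, best_err
```

On a toy dataset where the first feature tracks effort and the second is noise, the procedure keeps only the informative feature, which mirrors the "appropriateness for case-based reasoning" goal the excerpt attributes to LFS.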
“…Therefore, performing simultaneous feature selection (FS) and parameter optimization (PO) will enhance the accuracy of SDCPM. Principally, the FS preprocessing step will make the data cleaner by removing unimportant features [5], while the PO step will find the best configuration that enhances the performance of the used SDCPM. Generally, FS methods are grouped into three categories: embedded, filter, and wrapper techniques.…”
Section: Introduction
confidence: 99%
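Of the three FS categories the excerpt names, filter methods are the simplest: they score each feature independently of any downstream model. As a minimal illustration (hypothetical data; absolute Pearson correlation is just one of many possible filter scores), a top-k correlation filter might look like:

```python
import math

def pearson(xs, ys):
    # Pearson correlation coefficient of two equal-length sequences.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / math.sqrt(vx * vy) if vx and vy else 0.0

def filter_select(data, target, k):
    # Filter method: rank features by |correlation| with the target,
    # without consulting any estimation model, and keep the top k.
    scores = [(abs(pearson([row[f] for row in data], target)), f)
              for f in range(len(data[0]))]
    scores.sort(reverse=True)
    return [f for _, f in scores[:k]]
```

Wrapper methods, by contrast, would score candidate subsets by training the actual model (as in the greedy LFS-style search above), and embedded methods fold selection into the model's own training objective.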
“…Depending on the dataset used, preprocessing can involve cleaning the data by imputing missing values, or transforming and/or reducing the data by removing redundant and irrelevant features. One of the major concerns when using a dataset to construct an SDEE model is the negative impact of irrelevant and redundant information on estimation accuracy [8].…”
Section: Introduction
confidence: 99%
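The first preprocessing step the excerpt mentions, imputing missing values, is commonly done with a column-mean fill. A minimal sketch (hypothetical data layout: rows are projects, `None` marks a missing entry) might be:

```python
def mean_impute(data):
    # Replace each None entry with the mean of the observed values
    # in the same column.
    cols = len(data[0])
    means = []
    for f in range(cols):
        vals = [row[f] for row in data if row[f] is not None]
        means.append(sum(vals) / len(vals))
    return [[row[f] if row[f] is not None else means[f] for f in range(cols)]
            for row in data]
```

Redundancy and irrelevance removal, the other preprocessing concern the excerpt raises, would then be handled by a feature-selection pass such as the filter or wrapper sketches above.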
“…Hence, we need to remove irrelevant and redundant information and keep a subset of relevant features, so that only information about the effort (the dependent variable) is retained. For this purpose, many feature selection (FS) methods have been employed in the literature [8][9][10][11][12][13]. In this context, this paper aims to investigate the use of two feature selection methods as a preprocessing step before feeding data to the SVR model-building stage.…”
Section: Introduction
confidence: 99%