2019
DOI: 10.1002/smr.2180

Analysis of cluster center initialization of 2FA‐kprototypes analogy‐based software effort estimation

Abstract: Analogy‐based estimation is one of the most widely used techniques for effort prediction in software engineering. However, existing analogy‐based techniques suffer from an inability to correctly handle nonquantitative data. To deal with this limitation, a new technique called 2FA‐kprototypes was proposed and evaluated. 2FA‐kprototypes is based on the use of the fuzzy k‐prototypes clustering technique. Although fuzzy k‐prototypes algorithms are well known for their efficiency in clustering numerical and categor…
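Although the abstract is truncated, the technique it describes builds on fuzzy k-prototypes clustering, whose core idea is a dissimilarity measure that handles numeric and categorical features together. Below is a minimal sketch of the standard (crisp) k-prototypes dissimilarity; the feature split, the example project values, and the categorical weight `gamma` are illustrative assumptions, not details from the paper's 2FA-kprototypes variant.

```python
# Sketch of the standard k-prototypes dissimilarity for mixed data,
# which fuzzy k-prototypes builds on: squared Euclidean distance on
# numeric features plus a weighted simple-matching distance on
# categorical features. All example values below are hypothetical.

def kprototypes_dissimilarity(x_num, x_cat, proto_num, proto_cat, gamma=1.0):
    numeric = sum((a - b) ** 2 for a, b in zip(x_num, proto_num))
    mismatches = sum(1 for a, b in zip(x_cat, proto_cat) if a != b)
    return numeric + gamma * mismatches

# Hypothetical project: (size_kloc, team_exp) plus (language, platform).
project = ([12.0, 3.0], ["java", "web"])
prototype = ([10.5, 2.5], ["java", "embedded"])
print(kprototypes_dissimilarity(*project, *prototype, gamma=0.5))  # 3.0
```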

Cited by 3 publications (4 citation statements). References 61 publications.
“…There are various similarity functions in the ABE methods, for example, the Euclidean (EUC) similarity function,5,13,14 the Manhattan (MHT) similarity function,7,18,57,58,60-66 the maximum distance similarity function,8,67,68 the Minkowski (MKS) similarity function,46,57 fuzzy similarity,69,70 and optimized induced learning (OIL).71 The performance of the different similarity functions may vary based on the type of features (numerical, ordinal, or nominal) and the distribution of data samples in N-dimensional feature space.…”
Section: Evaluation Structures in ABE Techniques
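To make the contrast between these measures concrete, the sketch below implements the Euclidean, Manhattan, Minkowski, and maximum-distance measures on numeric feature vectors assumed to be already normalized; the 1/(distance + eps) conversion from distance to similarity is one common convention in the ABE literature, not a detail taken from this paper.

```python
# Four distance measures commonly used as the basis of ABE similarity
# functions, plus an assumed inverse-distance similarity transform.

def minkowski(a, b, p):
    return sum(abs(x - y) ** p for x, y in zip(a, b)) ** (1.0 / p)

def euclidean(a, b):
    return minkowski(a, b, 2)

def manhattan(a, b):
    return minkowski(a, b, 1)

def maximum_distance(a, b):
    return max(abs(x - y) for x, y in zip(a, b))

def similarity(dist, eps=1e-6):
    # Smaller distance -> larger similarity.
    return 1.0 / (dist + eps)

# Two hypothetical projects described by min-max-normalized features.
p1, p2 = [0.2, 0.8, 0.5], [0.3, 0.6, 0.5]
for name, d in [("EUC", euclidean(p1, p2)),
                ("MHT", manhattan(p1, p2)),
                ("MKS(p=3)", minkowski(p1, p2, 3)),
                ("MAX", maximum_distance(p1, p2))]:
    print(f"{name}: distance={d:.3f}, similarity={similarity(d):.1f}")
```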
“…7,14,20,37,40,50 In the ABE methods, the effort of a new project can be estimated based on the efforts of the K most similar projects, utilizing an adaptation function. In the majority of previous ABE methods, the mean,5,14,36-47,49-54,56,57,64,68 the median,8,67 and the inverse rank weighted mean7,18,48,51,55,62,63,66 are the most widely used adaptation functions.…”
Section: Evaluation Structures in ABE Techniques
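Here is a minimal sketch of the three adaptation functions named above, assuming the efforts of the K nearest analogues arrive already sorted by similarity; the weighting used for the inverse rank weighted mean (closest analogue weighted K, farthest weighted 1) is one common definition, not necessarily the one used in every cited method.

```python
import statistics

# Adaptation functions that combine the known efforts of the K most
# similar historical projects into one estimate. `efforts` is assumed
# ordered by similarity (index 0 = most similar analogue).

def mean_adaptation(efforts):
    return statistics.mean(efforts)

def median_adaptation(efforts):
    return statistics.median(efforts)

def inverse_rank_weighted_mean(efforts):
    k = len(efforts)
    weights = [k - i for i in range(k)]  # K, K-1, ..., 1
    return sum(w * e for w, e in zip(weights, efforts)) / sum(weights)

# Hypothetical efforts (person-hours) of the K=3 nearest analogues.
efforts = [320.0, 410.0, 500.0]
print(mean_adaptation(efforts))             # 410.0
print(median_adaptation(efforts))           # 410.0
print(inverse_rank_weighted_mean(efforts))  # (3*320 + 2*410 + 500)/6 = 380.0
```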
“…The solutions obtained by global methods cannot provide a sufficiently good approximation. In contrast, local learning methods divide the global learning problem into multiple simpler local learning models, which can considerably increase predictive performance [1], [3], [8]. This can be achieved by dividing the cost function into multiple independent local cost functions. In fact, the use of data locality to build prediction models has attracted great interest within the research community, as it helps to minimize model complexity, reduce bias, and enhance accuracy [8]-[11]. Since effort estimation datasets tend to be rather small and heterogeneous, locality approaches are likely to be more adequate and to produce better accuracy than models that do not use locality [12], [13].…”
Section: Introduction
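To illustrate the global-versus-local contrast the statement draws, the sketch below partitions a synthetic dataset with k-means and fits an independent linear model per cluster, routing a new project to its nearest cluster; the scikit-learn pipeline and the synthetic data are illustrative assumptions, not the approach of any particular cited paper.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)

# Synthetic stand-in for a small, heterogeneous effort dataset:
# two regimes with different size-to-effort relationships.
X = rng.uniform(1, 100, size=(60, 1))
y = np.where(X[:, 0] < 50, 12 * X[:, 0], 25 * X[:, 0]) + rng.normal(0, 30, 60)

# Global model: one cost function fitted over all projects at once.
global_model = LinearRegression().fit(X, y)

# Local models: split the learning problem into per-cluster subproblems,
# each with its own independent cost function.
kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
local_models = {
    c: LinearRegression().fit(X[kmeans.labels_ == c], y[kmeans.labels_ == c])
    for c in range(kmeans.n_clusters)
}

# A new project is routed to its nearest cluster's model.
new_project = np.array([[30.0]])
cluster = kmeans.predict(new_project)[0]
print("global estimate:", global_model.predict(new_project)[0])
print("local estimate: ", local_models[cluster].predict(new_project)[0])
```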