Abstract: Approximate methods are widely used to solve complex computational problems, as they can produce meaningful results in satisfactory time. Since the automatic clustering problem is NP-hard, non-exact methods with tractable complexity are desirable. For this reason, this work presents a swarm-intelligence metaheuristic, inspired by ant behavior, to solve data clustering problems. The implemented algorithm …
“…The following tables present a comparison between the algorithm presented in this work and some literature proposals, namely: AECBL1, MRDBSCAN, AK-means, and ACO (Cruz, 2010; Semaan et al., 2012; Kettani et al., 2015; Pacheco et al., 2017). To compare with the other methods, the algorithm is executed normally, the solution is found, and the Silhouette index is computed, since most of the methods in the literature use this index to report their results.…”
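The excerpt above uses the Silhouette index as the common ground for comparison but does not define it. As a reference point, here is a minimal pure-Python sketch of the standard Silhouette computation (mean over all points of (b − a)/max(a, b), where a is the mean intra-cluster distance and b the smallest mean distance to another cluster); this is an illustration, not the authors' implementation:

```python
import math

def silhouette_index(points, labels):
    """Mean silhouette over all points; higher is better, range [-1, 1]."""
    n = len(points)
    uniq = set(labels)
    scores = []
    for i in range(n):
        own = labels[i]
        same = [j for j in range(n) if labels[j] == own and j != i]
        if not same:
            scores.append(0.0)  # singleton cluster: s(i) := 0 by convention
            continue
        # a(i): mean distance to the other members of the same cluster
        a = sum(math.dist(points[i], points[j]) for j in same) / len(same)
        # b(i): smallest mean distance to the members of any other cluster
        b = min(
            sum(math.dist(points[i], points[j])
                for j in range(n) if labels[j] == lab) / labels.count(lab)
            for lab in uniq if lab != own
        )
        scores.append((b - a) / max(a, b))
    return sum(scores) / n
```

On two well-separated pairs, e.g. `silhouette_index([(0, 0), (0, 1), (10, 0), (10, 1)], [0, 0, 1, 1])`, the score is close to 1, while a labeling that mixes the pairs scores below 0.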
Section: Computational Experiments
“…The optimality was measured by the Calinski-Harabasz index (CHI). Finally, Pacheco et al. (2017) presented an algorithm inspired by ant behavior to solve data clustering problems. The experiments with the ACO algorithm used the Silhouette index (SI) as the evaluation function.…”
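The Calinski-Harabasz index mentioned here is the ratio of between-cluster to within-cluster dispersion, each normalized by its degrees of freedom. A minimal pure-Python sketch of the standard CHI formula (not the paper's code) is:

```python
import math

def calinski_harabasz(points, labels):
    """CHI = (B / (k - 1)) / (W / (n - k)), where B is the between-cluster
    dispersion and W the within-cluster dispersion; higher is better."""
    clusters = {}
    for p, lab in zip(points, labels):
        clusters.setdefault(lab, []).append(p)
    n, k = len(points), len(clusters)
    if not 2 <= k < n:
        raise ValueError("CHI requires 2 <= k < n")

    def centroid(pts):
        return tuple(sum(coord) / len(pts) for coord in zip(*pts))

    overall = centroid(points)
    between = within = 0.0
    for members in clusters.values():
        c = centroid(members)
        between += len(members) * math.dist(c, overall) ** 2
        within += sum(math.dist(p, c) ** 2 for p in members)
    return (between / (k - 1)) / (within / (n - k))
```

For the four points `[(0, 0), (0, 1), (10, 0), (10, 1)]`, the natural labeling `[0, 0, 1, 1]` yields CHI = 200, while the mixed labeling `[0, 1, 0, 1]` yields 0.02, which is why the index serves as a fitness function for clustering quality.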
Data clustering is a technique that aims to represent a dataset as clusters according to the similarities among its elements. Clustering algorithms usually assume that the number of clusters is known; unfortunately, for many applications the optimal number of clusters is unknown. This kind of problem is called automatic clustering. There are several cluster validity indexes for evaluating solutions, and it is known that the quality of a result is influenced by the chosen function. Based on this, this article describes a genetic algorithm for solving the automatic clustering problem, using the Calinski-Harabasz index as the evaluation function. Comparisons of the results with other algorithms in the literature are also presented. In a first analysis, fitness values equivalent to or higher than the competitors' are found in at least 58% of cases for each comparison. Our algorithm can also find the correct number of clusters, or values close to it, in 33 cases out of 48. In another comparison, some fitness values are lower even when the correct number of clusters is found, but graphically the partitionings are adequate. Thus, our proposal is justified, and improvements can be studied for the cases where the correct number of clusters is not found.
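The abstract describes a genetic algorithm that optimizes CHI without a fixed number of clusters, but gives no design details. The sketch below is therefore an illustrative assumption of one way such an algorithm could look (integer label-vector encoding, truncation selection, one-point crossover, point-reassignment mutation), not the authors' actual encoding or operators:

```python
import math
import random

def calinski_harabasz(points, labels):
    # Standard CHI: between- vs. within-cluster dispersion, df-normalized.
    clusters = {}
    for p, lab in zip(points, labels):
        clusters.setdefault(lab, []).append(p)
    n, k = len(points), len(clusters)
    centroid = lambda pts: tuple(sum(c) / len(pts) for c in zip(*pts))
    overall = centroid(points)
    between = within = 0.0
    for members in clusters.values():
        c = centroid(members)
        between += len(members) * math.dist(c, overall) ** 2
        within += sum(math.dist(p, c) ** 2 for p in members)
    return (between / (k - 1)) / (within / (n - k))

def ga_automatic_clustering(points, max_k=5, pop_size=30, gens=60, seed=1):
    """Evolve label vectors; the cluster count k emerges from the labels used."""
    rng = random.Random(seed)
    n = len(points)

    def fitness(labels):
        if len(set(labels)) < 2:
            return 0.0  # degenerate partition: no valid CHI
        return calinski_harabasz(points, labels)

    pop = [[rng.randrange(max_k) for _ in range(n)] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness, reverse=True)
        next_pop = pop[:2]                    # elitism: keep the two best
        while len(next_pop) < pop_size:
            a, b = rng.sample(pop[:10], 2)    # truncation selection
            cut = rng.randrange(1, n)         # one-point crossover
            child = a[:cut] + b[cut:]
            if rng.random() < 0.2:            # mutation: reassign one point
                child[rng.randrange(n)] = rng.randrange(max_k)
            next_pop.append(child)
        pop = next_pop
    best = max(pop, key=fitness)
    return best, fitness(best)
```

Running it on two well-separated blobs, the best individual partitions the data into at least two clusters with a positive CHI fitness; the parameter values (`max_k`, population size, mutation rate) are placeholders for the sketch, not values reported by the paper.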