Algorithms for Approximation 2007
DOI: 10.1007/978-3-540-46551-5_3

Computational Intelligence in Clustering Algorithms, With Applications

Abstract: Cluster analysis plays an important role in understanding various phenomena and exploring the nature of the data obtained. A remarkable diversity of ideas, across a wide range of disciplines, has been applied to clustering research. Here, we survey clustering algorithms in computational intelligence, particularly those based on neural networks and kernel-based learning. We further illustrate their applications in five real-world problems.

Cited by 6 publications (2 citation statements); references 81 publications.

“…56 Euclidean similarity measures the magnitude (frequency) of word vectors rather than their direction (semantics), and it can maximize the inter-cluster distance compared with cosine similarity. 48,57 The agglomerative (bottom-up) algorithm treats each document as a singleton cluster at the start, then successively merges the nearest pair of document clusters into one. Eventually, all clusters are merged into a single cluster containing all of the documents.…”
Section: Results (mentioning)
confidence: 99%
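A minimal sketch of the agglomerative scheme described in the statement above, using SciPy's hierarchical clustering. The toy document vectors are hypothetical stand-ins for term-frequency rows; this illustrates the general technique, not the cited paper's implementation.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

# Hypothetical word-count vectors for five documents (one row each).
docs = np.array([
    [5.0, 0.0, 1.0],
    [4.0, 1.0, 0.0],
    [0.0, 6.0, 2.0],
    [1.0, 5.0, 1.0],
    [0.0, 1.0, 7.0],
])

# Start from singleton clusters and successively merge the nearest pair.
# Euclidean distance compares vector magnitudes (frequencies), while
# cosine distance compares only directions (semantics).
Z_euclidean = linkage(docs, method="average", metric="euclidean")
Z_cosine = linkage(docs, method="average", metric="cosine")

# Cut each dendrogram into two flat clusters and compare the assignments.
print(fcluster(Z_euclidean, t=2, criterion="maxclust"))
print(fcluster(Z_cosine, t=2, criterion="maxclust"))
```

Running both linkages on the same matrix makes the quoted contrast concrete: documents with similar proportions but different total counts can land in different clusters under the two metrics.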
“…where
C: number of clusters;
N: total number of samples;
M: total number of features;
U = [u_ik]: degree of membership of the i-th sample in the k-th class;
V = [v_kj]: center value of the k-th class on the j-th dimension;
W = [w_kj]: feature weight of the k-th class on the j-th feature;
x_ij: value of the i-th sample on the j-th feature;
K_σ = exp(-(v_kj - v_oj)^2 / σ^2): Gaussian kernel function, which changes the original measurement method to a certain extent;
σ: kernel parameter; that article uses σ = 2 (candidates σ ∈ [2, 5, 10]) [34];
γ: information entropy coefficient, used to coordinate the influence of entropy on the clustering results, with value range (0, 1);
η: reciprocal of the sample data variance, such as…”
Section: Multi-objective Mathematical Model (mentioning)
confidence: 99%
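A minimal sketch of the Gaussian kernel term from the notation above, K_σ = exp(-(v_kj - v_oj)^2 / σ^2). The center values below are hypothetical, and since the full objective function is not quoted, only the kernel-modified measurement is shown; the 2·(1 - K) step is the standard kernel-induced squared distance for a Gaussian kernel (where K(x, x) = 1), not necessarily the cited paper's exact formulation.

```python
import numpy as np

def gaussian_kernel(a: np.ndarray, b: np.ndarray, sigma: float = 2.0) -> np.ndarray:
    """Element-wise Gaussian kernel exp(-(a - b)^2 / sigma^2)."""
    return np.exp(-((a - b) ** 2) / sigma ** 2)

# Hypothetical center values of two classes over three features.
v_k = np.array([0.2, 1.5, 3.0])
v_o = np.array([0.0, 1.0, 2.0])

for sigma in (2.0, 5.0, 10.0):  # the sigma candidates listed with [34]
    k = gaussian_kernel(v_k, v_o, sigma)
    # Since K(x, x) = 1 for a Gaussian kernel, 2 * (1 - K) behaves as a
    # kernel-induced squared distance between the two centers.
    print(sigma, k, 2.0 * (1.0 - k))
```

Larger σ flattens the kernel (K → 1, distance → 0), which is why the choice among σ ∈ [2, 5, 10] materially affects how strongly the measurement is modified.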