2008
DOI: 10.1007/s00357-008-9004-x
Solving Non-Uniqueness in Agglomerative Hierarchical Clustering Using Multidendrograms

Abstract: In agglomerative hierarchical clustering, pair-group methods suffer from a problem of non-uniqueness when two or more distances between different clusters coincide during the amalgamation process. The traditional approach for solving this drawback has been to take any arbitrary criterion in order to break ties between distances, which results in different hierarchical classifications depending on the criterion followed. In this article we propose a variable-group algorithm that consists in grouping more than t…
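The abstract only outlines the problem, so here is a minimal, self-contained Python sketch of the behaviour it describes. This is not the authors' MultiDendrograms implementation; the toy dissimilarities, the complete-linkage choice and all function names are assumptions made for illustration. With tied minimum distances, a classical pair-group step depends on the arbitrary tie-breaking criterion, whereas a variable-group step merges all tied clusters simultaneously.

from itertools import combinations

# Toy dissimilarities with a tie: d(A,B) = d(B,C) = 1 while d(A,C) = 2.
dist = {frozenset({"A", "B"}): 1.0,
        frozenset({"B", "C"}): 1.0,
        frozenset({"A", "C"}): 2.0}

def complete_linkage(c1, c2):
    # Complete-linkage dissimilarity between two clusters (tuples of labels).
    return max(dist[frozenset({a, b})] for a in c1 for b in c2)

def pair_group_step(clusters, pick):
    # Classical pair-group step: find the minimum inter-cluster distance and,
    # if several pairs are tied, break the tie with the arbitrary rule `pick`.
    pairs = list(combinations(clusters, 2))
    dmin = min(complete_linkage(a, b) for a, b in pairs)
    tied = [(a, b) for a, b in pairs if complete_linkage(a, b) == dmin]
    a, b = pick(tied)
    merged = tuple(sorted(a + b))
    return [c for c in clusters if c not in (a, b)] + [merged], dmin

singletons = [("A",), ("B",), ("C",)]

# Two equally "legal" tie-breaking rules produce two different first merges,
# hence two different dendrogram topologies for the same data.
for rule_name, pick in [("first tied pair", lambda t: t[0]),
                        ("last tied pair", lambda t: t[-1])]:
    clusters, h = pair_group_step(singletons, pick)
    print(f"pair-group ({rule_name}): merge at {h} -> {clusters}")

# Variable-group step: merge every cluster involved in a minimum-distance tie
# at once, yielding a single multidendrogram node instead of an arbitrary pair.
pairs = list(combinations(singletons, 2))
dmin = min(complete_linkage(a, b) for a, b in pairs)
members = sorted({m for a, b in pairs
                  if complete_linkage(a, b) == dmin for m in a + b})
print(f"variable-group: merge {members} together at height {dmin}")

Running the sketch prints two different pair-group merges for the same tied data, while the variable-group step produces a single merge of all three elements, which is the non-uniqueness issue the article addresses.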

Cited by 129 publications (96 citation statements)
References 12 publications

Citation statements (ordered by relevance):
“…In this section, we construct dendrogram "maps" [6,16]. Our starting point will be the set of experimental SC measures handled by indices (2)-(3) and (5)-(6).…”
Section: Dendrograms (mentioning)
Confidence: 99%
“…Our starting point will be the set of experimental SC measures handled by indices (2)-(3) and (5)-(6). The resulting matrix C = [c_jk] is treated with the MultiDendrograms hierarchical clustering package [16], and the results are visualized in Figs.…”
Section: Dendrograms (mentioning)
Confidence: 99%
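The statement above only names the software; as a rough sketch of the same workflow (clustering a symmetric measure matrix and drawing the resulting dendrogram), the snippet below uses SciPy rather than the MultiDendrograms application, and the 4x4 matrix, labels and linkage method are made-up assumptions.

import numpy as np
import matplotlib.pyplot as plt
from scipy.cluster.hierarchy import linkage, dendrogram
from scipy.spatial.distance import squareform

labels = ["s1", "s2", "s3", "s4"]            # hypothetical measure labels
C = np.array([[0.0, 0.2, 0.9, 0.8],
              [0.2, 0.0, 0.7, 0.9],
              [0.9, 0.7, 0.0, 0.3],
              [0.8, 0.9, 0.3, 0.0]])          # symmetric dissimilarities c_jk

Z = linkage(squareform(C), method="complete") # agglomerative clustering
dendrogram(Z, labels=labels)                  # dendrogram "map" of the measures
plt.show()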
“…Agglomerative or 'bottom up' approaches (Ward 1963, Fernández and Gómez 2008) start with each object as a cluster and recursively merge the two most similar clusters. Divisive or 'top down' approaches (Chavent et al. 2007, Zhong 2008) start with all observations as one cluster and at each step divide the cluster with the most dissimilar observations.…”
Section: Introduction (mentioning)
Confidence: 99%
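To make the contrast in this statement concrete, here is a small sketch: a bottom-up pass done with SciPy's linkage on toy one-dimensional data, and a single hypothetical top-down split that cuts at the largest gap. The data, the single-linkage choice and the largest-gap splitting rule are assumptions for illustration, not the cited authors' procedures.

import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import pdist

x = np.array([[1.0], [1.2], [5.0], [5.1], [9.0]])   # toy 1-D observations

# Bottom-up: start from singletons and repeatedly merge the closest clusters.
Z = linkage(pdist(x), method="single")
print("agglomerative, 2 clusters:", fcluster(Z, t=2, criterion="maxclust"))

# Top-down: start from one cluster and split it at the largest gap
# (a simplistic stand-in for choosing the most dissimilar observations).
def divisive_step(points):
    order = np.argsort(points[:, 0])
    gaps = np.diff(points[order, 0])
    cut = order[np.argmax(gaps) + 1:]   # original indices after the largest gap
    labels = np.ones(len(points), dtype=int)
    labels[cut] = 2
    return labels

print("divisive, first split:   ", divisive_step(x))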
“…Clustering has been extensively used in the fields of pattern recognition, statistics and machine learning. Typically, the most common algorithms are hierarchical clustering algorithms [1][2][3] and the K-means algorithm [4]. Meanwhile, clustering algorithms based on probability models [5,6] are becoming more and more widely applied.…”
Section: Introduction (mentioning)
Confidence: 99%