2020
DOI: 10.1088/1757-899x/981/2/022071
The comparative study on agglomerative hierarchical clustering using numerical data

Abstract: The conventional way of converting data into singletons and merging them has many drawbacks, chiefly computational complexity. In this context, a hierarchical clustering method provides quantitative measures of similarity among objects that preserve not only the structure of categorical attributes but also the relative distance of numeric values. For numeric data the number of clusters can be validated through integral data, while the hierarchical and partitioning methods capture the relationships among categorical items. In this paper we he…

Cited by 9 publications (3 citation statements)
References 7 publications
“…Hierarchical clustering can be either agglomerative (bottom-up), where smaller clusters are merged into larger clusters, or divisive (top-down), where larger clusters are split into smaller clusters. It uses different linkage methods to measure the distance between two sub-clusters of data points. The most common linkage types are single, complete, and average [34].…”
Section: Ngo [32], mentioning
confidence: 99%
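The statement above names the three common linkage criteria without tying them to a particular implementation. The following minimal sketch, which assumes scikit-learn as the library (an assumption, not something the citing paper specifies), shows how the choice of linkage changes how agglomerative clustering merges sub-clusters:

```python
# Minimal sketch: agglomerative (bottom-up) clustering with the three linkage
# types named above. scikit-learn is assumed; the cited works do not prescribe
# a particular implementation.
import numpy as np
from sklearn.cluster import AgglomerativeClustering

rng = np.random.default_rng(0)
# Two well-separated numeric blobs as toy data.
X = np.vstack([rng.normal(0.0, 0.5, (20, 2)),
               rng.normal(5.0, 0.5, (20, 2))])

for linkage in ("single", "complete", "average"):
    # single = distance of the closest pair of points between sub-clusters,
    # complete = distance of the farthest pair, average = mean pairwise distance.
    model = AgglomerativeClustering(n_clusters=2, linkage=linkage)
    labels = model.fit_predict(X)
    print(linkage, np.bincount(labels))
```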
“…The improved condensed hierarchical clustering algorithm proposed in this paper is utilized to cluster text information, and the central point of each cluster is figured out. The calculation formula is shown in formula (22) [23-27]:…”
Section: Name Disambiguation and Alumni Identification, mentioning
confidence: 99%
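Formula (22) itself is not reproduced in this excerpt. A common choice for a cluster's central point is the coordinate-wise mean of its members; the sketch below assumes that definition and is only illustrative, not the citing paper's actual formula:

```python
# Hypothetical sketch: formula (22) is not shown in the excerpt, so the cluster
# "central point" is assumed here to be the coordinate-wise mean of its members.
import numpy as np

def cluster_centroids(X: np.ndarray, labels: np.ndarray) -> dict[int, np.ndarray]:
    """Return the mean vector of every cluster id appearing in `labels`."""
    return {k: X[labels == k].mean(axis=0) for k in np.unique(labels)}

# Example: centroids of two small clusters of 2-D points.
X = np.array([[0.0, 0.1], [0.2, 0.0], [5.0, 5.1], [4.9, 5.0]])
labels = np.array([0, 0, 1, 1])
print(cluster_centroids(X, labels))  # {0: [0.1, 0.05], 1: [4.95, 5.05]}
```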
“…In contrast to classification, clustering describes unsupervised analysis approaches, which focus on the assembly process of data to achieve automatically defined homogeneous groups by identifying statistical structures and patterns (Dayan 1999; Ahuja and Dubey 2017). Clustering approaches like k-means (MacQueen 1967; Orkphol and Yang 2019), expectation maximization (Dempster et al. 1977; Shelke et al. 2017) and agglomerative hierarchical clustering (Tan et al. 2005; Praveen et al. 2020) forgo a reduction of dimensionality and try to group matching elements of the dataset based on their structure (Feldman and Sanger 2007; Heyer et al. 2006; AL-Sharuee et al. 2018). The resulting clusters are derived directly from the structure of the data themselves (Feldman and Sanger 2007; Heyer et al. 2006).…”
Section: Clustering, mentioning
confidence: 99%
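As a small illustration of the contrast the statement draws with classification, the sketch below (an assumption, not code from any of the cited works) groups unlabeled points with k-means: no class labels are supplied, and the groups emerge purely from the structure of the data.

```python
# Illustrative sketch (not from the cited works): unsupervised grouping with
# k-means, one of the clustering approaches named above.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(42)
X = np.vstack([rng.normal(-3.0, 0.6, (30, 2)),   # no labels are provided;
               rng.normal(+3.0, 0.6, (30, 2))])  # clusters come from the data alone

kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
print(kmeans.labels_[:5])          # automatically assigned group ids
print(kmeans.cluster_centers_)     # the two discovered group centers
```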