2001
DOI: 10.1007/s00453-001-0010-1

Robust Distance-Based Clustering with Applications to Spatial Data Mining

Abstract: In this paper, we present a method for clustering geo-referenced data suitable for applications in spatial data mining, based on the medoid method. The medoid method is related to k-Means, with the restriction that cluster representatives be chosen from among the data elements. Although the medoid method in general produces clusters of high quality, especially in the presence of noise, it is often criticized for the Ω(n²) time that it requires. Our method incorporates both proximity and density information to…

Cited by 32 publications (39 citation statements)
References 70 publications (87 reference statements)
“…Moreover, the method suffers from severe limitations when clustering large spatial datasets [5] due to the complexity of computing the distance between the medoid points representing each pair of clusters. These efficiency drawbacks are partially alleviated by adopting both proximity and density information to achieve high-quality spatial clusters in sub-quadratic time without requiring the user to specify the number of clusters a priori [7]. Similarly, DBSCAN [6] exploits density information to efficiently detect clusters of arbitrary shape in point spatial data with noise.…”
Section: Background and Motivation
confidence: 99%
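For context, the medoid restriction these statements refer to (cluster representatives drawn from the data itself, which gives robustness to noise) can be sketched as a naive PAM-style loop. This is an illustrative sketch with our own toy data and names, not the sub-quadratic method of the cited paper:

```python
def k_medoids(points, k, dist, iters=100):
    """Naive k-medoids: representatives must be actual data points,
    which keeps an outlier from dragging a center away as in k-means."""
    medoids = list(points[:k])  # simple deterministic init for the sketch
    for _ in range(iters):
        # Assignment step: attach each point to its nearest medoid.
        clusters = {m: [] for m in medoids}
        for p in points:
            clusters[min(medoids, key=lambda m: dist(p, m))].append(p)
        # Update step: pick the member minimizing total in-cluster distance.
        new_medoids = [min(g, key=lambda c: sum(dist(c, q) for q in g))
                       for g in clusters.values() if g]
        if set(new_medoids) == set(medoids):
            break
        medoids = new_medoids
    return medoids

# Two 1-D groups plus a far outlier; both medoids stay on real data points.
data = [1.0, 1.2, 0.8, 10.0, 10.3, 9.7, 100.0]
meds = k_medoids(data, 2, dist=lambda a, b: abs(a - b))
```

The Ω(n²) cost criticized in the abstract shows up here in the update step, which compares every pair of points within each cluster.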
See 1 more Smart Citation
“…Moreover, the method suffers from severe limitations when clustering large spatial dataset [5] due to the complexity of computing distance between medoid points representing each pair of clusters. These efficiency drawbacks are partially alleviated when adopting both proximity and density information to achieve high quality spatial clusters in a sub-quadratic time without requiring the user to a-priori specify the number of clusters [7]. Similarly, DBSCAN [6] exploits density information to efficiently detect clusters of arbitrary shape from point spatial data with noise.…”
Section: Background and Motivationmentioning
confidence: 99%
“…length(X2) = [7,10] which differ only in the value of a single selector (length), the first-order clause obtained by generalizing the pairs of comparable selectors in both H1 and H2 is: Example 3: Let us consider TC that is the set of first-order clauses including:…”
Section: Example
confidence: 99%
“…Optimization in the K-means sense as commonly used 10,12,13 would require that the weighted mean center of all straw assigned to a specific plant site be identical to the SSAO site was that the variance in straw assignment per site was 55%…”
Section: Validation of SSAO and Comparison with K-means
confidence: 99%
“…In the first case, spatial clustering algorithms such as the relatively simple K-means method 10,11 and its more elaborate modifications 12,13 produce facility siting solutions that are locally optimized and often close to globally optimized, 14 within constraints imposed by practical limitations on computational time. The general goal of such optimizations is maximization of the area covered by service facilities within prescribed limits on response time 15 or minimization of the total cost of providing service.…”
confidence: 99%
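The locally-optimized behavior this statement describes is characteristic of the standard Lloyd iteration for k-means. A minimal one-dimensional sketch (our own illustration with toy demand data, not the SSAO method of the citing paper):

```python
def k_means_1d(points, centers, iters=50):
    """Plain Lloyd's k-means: alternate nearest-center assignment and
    mean update. Each pass can only lower the total distance, so the
    result is locally, not necessarily globally, optimal."""
    for _ in range(iters):
        groups = [[] for _ in centers]
        for p in points:
            j = min(range(len(centers)), key=lambda c: abs(p - centers[c]))
            groups[j].append(p)
        # New center = mean of its group (kept unchanged if group is empty).
        new = [sum(g) / len(g) if g else centers[j]
               for j, g in enumerate(groups)]
        if new == centers:
            break
        centers = new
    return centers

# Demand points along a road; site two facilities near the demand clusters.
demand = [0.0, 1.0, 2.0, 10.0, 11.0, 12.0]
sites = k_means_1d(demand, centers=[0.0, 12.0])
```

Unlike the medoid method above, the returned sites need not coincide with any demand point.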
“…OUTPUT: the detected preference or that no preference was found Complexity: If n denotes the number of tuples in the log relation and k is the number of different values, the k-means clustering needs O(k²) [2], leading to the overall complexity O(n + k²). Typically, we have k ≪ n and with it the complexity O(n).…”
Section: Algorithm 1: Miner for Categorical Preferences in Static Dom
confidence: 99%
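The O(n + k²) argument in this statement, scan the n log tuples once to collect the k distinct values and pay the quadratic cost only on those k values, can be illustrated as follows (toy data and function name are ours; the quadratic pairwise pass stands in for the clustering step):

```python
def distinct_then_pairwise(log_values):
    """O(n) scan to collect the k distinct categorical values, then an
    O(k^2) pairwise pass over those values only: O(n + k^2) in total."""
    distinct = sorted(set(log_values))  # one O(n) pass over the log relation
    # Quadratic work touches only the k distinct values, not the n tuples.
    pairs = [(a, b) for a in distinct for b in distinct if a < b]
    return distinct, pairs

# n = 6000 log tuples but only k = 3 distinct values: the quadratic
# step produces 3 pairs instead of millions.
log = ["red", "blue", "red", "red", "blue", "green"] * 1000
distinct, pairs = distinct_then_pairwise(log)
```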