Proceedings of the 2007 SIAM International Conference on Data Mining 2007
DOI: 10.1137/1.9781611972771.37
Robust, Complete, and Efficient Correlation Clustering

Abstract: Correlation clustering aims at detecting groups of data points that lie on common hyperplanes in the data space and, thus, exhibit correlations among subsets of their features. Recently proposed methods for correlation clustering usually suffer from several severe drawbacks, including poor robustness against noise or parameter settings, incomplete results (i.e. missed clusters), poor usability due to complex input parameters, and poor scalability. In this paper, we propose the novel correlation clustering…
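The abstract's core idea — points belonging to the same correlation cluster lie near a common low-dimensional hyperplane — can be illustrated with a local PCA sketch. This is a minimal illustration of estimating each point's local correlation dimensionality (the first step of PCA-based correlation clustering), not an implementation of the paper's algorithm; the function name, neighborhood size `k`, and variance threshold `alpha` are illustrative choices.

```python
import numpy as np

def local_correlation_dim(points, k=10, alpha=0.85):
    """Estimate each point's local correlation dimensionality: the number
    of principal components of its k-nearest neighbors needed to explain
    a fraction alpha of the neighborhood variance."""
    dims = []
    for p in points:
        # k nearest neighbors by Euclidean distance (brute force)
        d = np.linalg.norm(points - p, axis=1)
        nn = points[np.argsort(d)[:k]]
        # eigenvalues of the neighborhood covariance, descending
        eigvals = np.linalg.eigvalsh(np.cov(nn.T))[::-1]
        ratio = np.cumsum(eigvals) / eigvals.sum()
        # first index where the cumulative variance ratio reaches alpha
        dims.append(int(np.searchsorted(ratio, alpha) + 1))
    return np.array(dims)

# Points scattered near a 1-D line in 3-D space should mostly
# get correlation dimensionality 1.
rng = np.random.default_rng(0)
t = rng.uniform(-1, 1, size=200)
line = np.c_[t, 2 * t, -t] + rng.normal(scale=0.01, size=(200, 3))
dims = local_correlation_dim(line, k=20)
print(np.bincount(dims))
```

Grouping points by this dimensionality before clustering within each group is the kind of partitioning strategy PCA-based correlation clustering builds on.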

Cited by 52 publications (46 citation statements)
References 17 publications (12 reference statements)
“…In our experiments we use the correlation clustering algorithm COPAC [1] to generate the correlation clusters in a preprocessing step to our method. We choose this algorithm due to its efficiency, effectivity, and robustness.…”
Section: Discussion
confidence: 99%
“…The decision for a specific clustering algorithm will also determine whether or not a data object may belong to several clusters simultaneously. In our experiments we use COPAC [1], a new correlation clustering algorithm that is shown to improve over 4C as well as ORCLUS w.r.t. efficiency, effectivity, and robustness.…”
Section: Deriving Quantitative Models For
confidence: 99%
“…The algorithms define their own heuristics to distinguish between dense and non dense space regions, but they usually rely on user defined density thresholds. Some recent examples of density based algorithms are COPAC [1], STATPC [14] and MrCC [5].…”
Section: Clustering
confidence: 99%
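The excerpt above describes density-based methods that separate dense from non-dense regions via user-defined density thresholds. A minimal sketch of that thresholding idea, with illustrative parameter names `eps` (neighborhood radius) and `min_pts` (minimum neighbor count) — a generic density test in the style of such algorithms, not code from any of the cited methods:

```python
import numpy as np

def dense_mask(points, eps=0.3, min_pts=5):
    """Mark each point as 'dense' if at least min_pts other points fall
    within radius eps -- the user-defined density threshold the cited
    density-based algorithms rely on."""
    # pairwise Euclidean distances (brute force, O(n^2) memory)
    diff = points[:, None, :] - points[None, :, :]
    dist = np.linalg.norm(diff, axis=-1)
    # count neighbors within eps, excluding the point itself
    counts = (dist <= eps).sum(axis=1) - 1
    return counts >= min_pts

rng = np.random.default_rng(1)
cluster = rng.normal(0.0, 0.1, size=(50, 2))    # tight cluster -> dense
outliers = rng.uniform(5.0, 10.0, size=(5, 2))  # scattered -> sparse
mask = dense_mask(np.vstack([cluster, outliers]))
```

The sensitivity of the result to `eps` and `min_pts` is exactly the parameter-setting burden the excerpt points out.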
“…Figure 1 illustrates the research problems and the QMAS results. Figure 1(a) is a sample satellite image from the city of Annapolis, MD, USA. We decomposed it into 1,024 (32×32) tiles, very few (4) of which were manually labeled as "City" (red), "Water" (cyan), "Urban Trees" (green) or "Forest" (black).…”
Section: Introduction
confidence: 99%
“…There are axis-parallel subspace and projected clustering approaches implemented like CLIQUE [11], PROCLUS [12], SUBCLU [13], PreDeCon [14], HiSC [15], and DiSH [16]. Furthermore, some biclustering or pattern-based clustering approaches are supported like δ-bicluster [17], FLOC [18] or p-cluster [19], and correlation clustering approaches are incorporated like ORCLUS [20], 4C [21], HiCO [22], COPAC [23], ERiC [24], and CASH [25]. The improvements on these algorithms described in [26] are also integrated in ELKI.…”
Section: Available Algorithms
confidence: 99%