Proceedings of the 2004 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, 2004. CVPR 2004.
DOI: 10.1109/cvpr.2004.1315194

Multiobjective data clustering

Cited by 129 publications (77 citation statements)
References 4 publications
“…For data with different types of cluster structures, other objective functions may be more appropriate [12]. On the other hand, it is also suggested that stability, which reflects the variation in the clustering solutions under perturbations, should be considered in developing clustering algorithms [13].…”
Section: B. Scalarized Multiobjective Learning
confidence: 99%
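The stability criterion mentioned in the statement above can be made concrete with a small experiment: re-cluster random subsamples of the data and measure how much the resulting partitions agree. The sketch below is illustrative only and is not the procedure of the cited works; it assumes scikit-learn's KMeans and the adjusted Rand index, and the subsampling rate and trial count are arbitrary choices.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import adjusted_rand_score


def clustering_stability(X, k, n_trials=20, subsample=0.8, seed=0):
    """Average pairwise agreement (ARI) between k-means runs on random subsamples."""
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    runs = []  # list of (subsample indices, labels on that subsample)
    for _ in range(n_trials):
        idx = rng.choice(n, size=int(subsample * n), replace=False)
        labels = KMeans(n_clusters=k, n_init=10,
                        random_state=int(rng.integers(1 << 30))).fit_predict(X[idx])
        runs.append((idx, labels))

    scores = []
    for i in range(n_trials):
        for j in range(i + 1, n_trials):
            idx_i, lab_i = runs[i]
            idx_j, lab_j = runs[j]
            # Compare the two partitions on the points the subsamples share.
            common, pos_i, pos_j = np.intersect1d(idx_i, idx_j, return_indices=True)
            if common.size > 1:
                scores.append(adjusted_rand_score(lab_i[pos_i], lab_j[pos_j]))
    return float(np.mean(scores))
```

A higher average ARI indicates partitions that change little under perturbation; in a multiobjective setting such a score could serve as one objective alongside a compactness or separation index.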
“…However, they have some problems: one index is rarely equally well applicable to different types of datasets, i.e. one criterion may not fit the data distribution well when regions of the data contain clusters of diverse shapes [20]; moreover, many of these algorithms often fall into local optima and sometimes deliver results that are meaningless for data analysis (e.g. when only a separation validity index is optimized, it could yield one group containing all genes).…”
Section: Introduction
confidence: 99%
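As a concrete illustration of a single index failing to match the cluster structure, the hypothetical snippet below compares the silhouette score (which rewards compact, well-separated groups) for k-means and DBSCAN on a two-moons dataset; the dataset and parameter values are assumptions chosen for demonstration, not taken from the cited papers.

```python
from sklearn.datasets import make_moons
from sklearn.cluster import KMeans, DBSCAN
from sklearn.metrics import silhouette_score

# Non-convex, interleaved clusters.
X, _ = make_moons(n_samples=500, noise=0.05, random_state=0)

km_labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
db_labels = DBSCAN(eps=0.2, min_samples=5).fit_predict(X)  # assumed parameters

# With these parameters DBSCAN typically recovers the two moons, while k-means
# cuts each moon in half; yet the silhouette index usually ranks the k-means
# partition higher, because it favors convex, well-separated groups.
print("silhouette, k-means:", silhouette_score(X, km_labels))
print("silhouette, DBSCAN :", silhouette_score(X, db_labels))
```

On such non-convex data the linear k-means split typically receives the higher silhouette even though DBSCAN recovers the intended groups, which is exactly the mismatch the quoted passage warns about.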
“…In this case, it is called Multi-Objective Clustering (MOC), and it corresponds to the simultaneous optimization of several validity indices. This approach is a two-step process: (i) discovery of clusters by any clustering algorithm during the optimization of a quality index, and (ii) construction of an "optimal" set of solutions that correspond to various tradeoffs between different objectives [20]. The work of [25] uses an MOC approach incorporating a method called SiMM-TS (Significant Multi-class Membership - Two Stage).…”
Section: Introduction
confidence: 99%
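The two-step process described in the quotation can be sketched as follows. This is a minimal illustration under assumed objectives (silhouette, to be maximized, and the Davies-Bouldin index, to be minimized) and an assumed candidate generator (k-means over several cluster counts and seeds); it is not an implementation of MOC or SiMM-TS from the cited works.

```python
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs
from sklearn.metrics import silhouette_score, davies_bouldin_score

X, _ = make_blobs(n_samples=400, centers=4, random_state=0)

# Step (i): generate candidate partitions by varying the cluster count and seed.
candidates = []
for k in range(2, 8):
    for seed in range(3):
        labels = KMeans(n_clusters=k, n_init=10, random_state=seed).fit_predict(X)
        candidates.append((k, seed,
                           silhouette_score(X, labels),        # higher is better
                           davies_bouldin_score(X, labels)))   # lower is better


def dominates(a, b):
    # a dominates b if it is no worse in both objectives and strictly better in one.
    return (a[2] >= b[2] and a[3] <= b[3]) and (a[2] > b[2] or a[3] < b[3])


# Step (ii): keep only the non-dominated solutions, i.e. the trade-off set.
pareto = [c for c in candidates
          if not any(dominates(o, c) for o in candidates if o is not c)]
for k, seed, sil, db in pareto:
    print(f"k={k} seed={seed} silhouette={sil:.3f} davies_bouldin={db:.3f}")
```

Each non-dominated candidate represents a different trade-off between the two assumed objectives; selecting a single partition from this set requires an additional criterion and is outside the scope of this sketch.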
“…For example, Saha et al. observe "evidently one reason for the difficulty of clustering is that for many data sets no unambiguous partitioning of the data exists, or can be established, even by humans" [1]. Moreover, matters are even worse for applications that seek non-homogeneous groupings, because "virtually all existing clustering algorithms assume a homogeneous clustering criterion over the entire feature space … because its intrinsic criterion may not fit well with the data distribution in the entire feature space" [2]; furthermore, "by focusing on just one aspect of cluster quality, most clustering algorithms … are not robust to variations in cluster shape, size, dimensionality and other characteristics" [3]. Finally, in specific application domains, users seek clusters with similar extrinsic characteristics; their definitions of "interestingness" of clusters are usually different from those used in typical clustering algorithms.…”
Section: Introduction
confidence: 99%