2012
DOI: 10.1109/tsp.2012.2196696
Robust Clustering Using Outlier-Sparsity Regularization

Abstract: Notwithstanding the popularity of conventional clustering algorithms such as K-means and probabilistic clustering, their clustering results are sensitive to the presence of outliers in the data. Even a few outliers can compromise the ability of these algorithms to identify meaningful hidden structures, rendering their outcome unreliable. This paper develops robust clustering algorithms that not only aim to cluster the data, but also to identify the outliers. The novel approaches rely on the infrequent presence …
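The idea sketched in the abstract can be illustrated with a small alternating-minimization routine: each point is modeled as a cluster centroid plus a per-point outlier vector, and a group-lasso penalty shrinks most outlier vectors exactly to zero. This is a minimal sketch of that general technique, not the paper's exact algorithm; the function name, parameters, and initialization scheme are illustrative assumptions.

```python
import numpy as np

def robust_kmeans(X, K, lam, M_init=None, n_iter=50, seed=0):
    """K-means with outlier-sparsity regularization (sketch).

    Each point is modeled as x_n ~ m_{c(n)} + o_n, and the group-lasso
    penalty lam * sum_n ||o_n||_2 drives most outlier vectors o_n
    exactly to zero; points with nonzero o_n are flagged as outliers.
    """
    X = np.asarray(X, dtype=float)
    rng = np.random.default_rng(seed)
    n, d = X.shape
    M = (np.asarray(M_init, dtype=float).copy() if M_init is not None
         else X[rng.choice(n, K, replace=False)].copy())
    O = np.zeros_like(X)                  # per-point outlier vectors
    for _ in range(n_iter):
        R = X - O                         # outlier-corrected points
        # assign each corrected point to its nearest centroid
        labels = np.argmin(
            ((R[:, None, :] - M[None, :, :]) ** 2).sum(-1), axis=1)
        # update centroids from the corrected points
        for k in range(K):
            if np.any(labels == k):
                M[k] = R[labels == k].mean(axis=0)
        # update outlier vectors by group soft-thresholding of residuals
        E = X - M[labels]
        norms = np.linalg.norm(E, axis=1, keepdims=True)
        O = np.maximum(0.0, 1.0 - lam / np.maximum(norms, 1e-12)) * E
    flagged = np.linalg.norm(O, axis=1) > 0
    return M, labels, flagged
```

With two tight clusters and one gross outlier, only the outlier's residual exceeds the threshold `lam`, so only it keeps a nonzero outlier vector and gets flagged; all inliers shrink to zero.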

Cited by 51 publications (28 citation statements)
References 37 publications (81 reference statements)
“…For each s, we applied the EIMMs with K = 2, 4, …, 48, 50 and adopted the number of components K when the increase in the likelihood was saturated. The resulting rate (10) and average distortion (11) were calculated for the two data sets, the Laplacian data set (Fig. 1(a)) and the Gaussian data set (Fig.…”
Section: Application to Rate-Distortion Computation
confidence: 99%
“…In particular, Laplacian mixture models (LMMs) have been proposed and applied for the purposes of robust clustering and overcomplete source separation [6,14]. Among robust clustering methods [11,10], those based on LMMs provide simple learning algorithms similar to the learning of Gaussian mixture models (GMMs). However, there are two drawbacks in LMMs: (1) the degree of the robustness is uncontrollable, and (2) a cluster mean vector can inappropriately converge to a data sample, which is caused by the nature of the absolute-loss function.…”
Section: Introduction
confidence: 99%
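The drawback quoted above, that under the absolute loss a cluster mean can converge to a data sample, comes from a basic property: the minimizer of the summed absolute deviations is the sample median, an actual data point, whereas the squared loss yields the mean. A tiny 1-D illustration of that property (not code from the cited works; the grid search is just for demonstration):

```python
import numpy as np

# One cluster of four points plus a gross outlier.
x = np.array([0.0, 1.0, 2.0, 3.0, 100.0])

# Absolute (Laplacian) loss: minimize sum |x_i - m| over a grid of m.
grid = np.linspace(-5.0, 105.0, 2201)
abs_loss = np.abs(x[None, :] - grid[:, None]).sum(axis=1)
m_abs = grid[np.argmin(abs_loss)]   # the median, 2.0 -- a data sample
m_sq = x.mean()                     # squared loss gives 21.2, pulled
                                    # far off by the outlier
```

The median's robustness is what makes Laplacian mixtures attractive, but the same property means a location estimate can lock onto an individual sample, the behavior the quoted passage flags.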
“…As a result, the proposed algorithm will allow for (a) soft assignments of data points to clusters; (b) rating of annotators; and, (c) estimating the number of Gaussian components in the mixture model based on the algorithm developed in [7] for a Gaussian mixture only. Relative to prior works in robust clustering [8][9][10][11][12], the present contribution accounts for the variable reliability of data to be clustered, which is a distinct feature of crowdsourcing.…”
Section: Introduction
confidence: 99%
“…In [11] the clusters are recovered in a sequential manner, in contrast to [10] (and all previous algorithms), where clusters are recovered simultaneously. Other such methods are given in [12], [13].…”
Section: Introduction
confidence: 99%