Tenth IEEE International Conference on Computer Vision (ICCV'05), Volume 1, 2005
DOI: 10.1109/iccv.2005.27
A unifying approach to hard and probabilistic clustering

Citation types: 2 supporting, 113 mentioning, 0 contrasting
Years published: 2006–2021
Cited by 115 publications (115 citation statements)
References 15 publications
“…Finally, we can also ascribe to preference analysis all the approaches based on higher-order clustering [19,20,21,22], where higher-order similarity tensors are defined between n-tuples of points as the probability of the points being clustered together, measured in terms of residual errors with respect to provisional models. In this way preferences give rise to a hypergraph whose hyperedges encode the existence of a structure able to explain the incident vertices.…”
Section: Preference Analysis (mentioning)
confidence: 99%
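
The higher-order clustering idea quoted above can be made concrete with a small sketch: given 2D points and lines as the provisional models, a third-order affinity tensor scores each triple of points by the residual of a line fitted to it. This is a minimal illustration, not the method of any cited paper; the function name, the Gaussian kernel, and the choice of lines as models are assumptions.

```python
import itertools
import numpy as np

def triple_affinity_tensor(points, sigma=0.1):
    """Hypothetical sketch: T[i, j, k] = exp(-(residual / sigma)^2), where the
    residual measures how well a line (the provisional model) fits the triple.

    points: (n, 2) NumPy array of 2D coordinates.
    """
    n = len(points)
    T = np.zeros((n, n, n))
    for i, j, k in itertools.combinations(range(n), 3):
        P = points[[i, j, k]]
        # Total-least-squares line through the centroid via SVD; the smallest
        # singular value gives the orthogonal fitting residual.
        _, s, _ = np.linalg.svd(P - P.mean(axis=0))
        residual = s[-1] / np.sqrt(3)           # RMS distance to the fitted line
        a = np.exp(-((residual / sigma) ** 2))  # high affinity = likely same structure
        for perm in itertools.permutations((i, j, k)):
            T[perm] = a                         # the tensor is fully symmetric
    return T
```

Each nonzero entry then acts as a hyperedge weight in the hypergraph view described in the quote.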
“…There are several popular ways [12] to construct the affinity matrix, such as the k-nearest-neighbor graph and the fully connected graph. The affinity matrix is normalized by finding its closest doubly stochastic matrix under a certain error measure [18]–[20], and a simple clustering algorithm (e.g., k-means) then partitions the embedded coordinates formed by the principal k eigenvectors of the normalized affinity matrix.…”
Section: Construction of the Affinity Matrix (mentioning)
confidence: 99%
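
The pipeline this excerpt describes (affinity construction, normalization, eigenvector embedding, k-means) can be sketched in a few lines. This is a generic spectral-clustering sketch under assumed parameter names (sigma, n_neighbors) using symmetric degree normalization as one common choice, not the exact procedure of the cited works.

```python
import numpy as np
from sklearn.cluster import KMeans  # assumes scikit-learn is available

def spectral_clusters(X, n_clusters, n_neighbors=10, sigma=1.0):
    # Fully connected Gaussian affinity, then sparsified to a k-NN graph.
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    W = np.exp(-d2 / (2 * sigma**2))
    np.fill_diagonal(W, 0.0)
    nearest = np.argsort(-W, axis=1)[:, :n_neighbors]
    mask = np.zeros_like(W, dtype=bool)
    np.put_along_axis(mask, nearest, True, axis=1)
    W = np.where(mask | mask.T, W, 0.0)  # keep an edge if either endpoint selects it
    # Symmetric normalization D^{-1/2} W D^{-1/2}.
    d = W.sum(axis=1) + 1e-12
    Wn = W / np.sqrt(np.outer(d, d))
    # Embed each point by the principal k eigenvectors, then run k-means.
    _, vecs = np.linalg.eigh(Wn)         # eigenvalues in ascending order
    emb = vecs[:, -n_clusters:]
    emb /= np.linalg.norm(emb, axis=1, keepdims=True) + 1e-12
    return KMeans(n_clusters=n_clusters, n_init=10).fit_predict(emb)
```

Row-normalizing the embedding before k-means follows one common recipe; other spectral variants skip that step.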
“…In [18], [19], it has been shown that the key difference between Ratio-cut and Normalized-cut is the error measure used to find the closest doubly stochastic approximation of the input affinity matrix during the normalization step. When repeated, the Normalized-cut process converges to the globally optimal solution under the relative entropy measure (also called the Kullback-Leibler divergence), while L1 normalization leads to Ratio-cut clustering.…”
Section: Construction of the Affinity Matrix (mentioning)
confidence: 99%
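
The repeated relative-entropy normalization mentioned in this excerpt corresponds to Sinkhorn-style alternating row and column scaling: iterating it drives a positive affinity matrix toward its closest doubly stochastic matrix under the Kullback-Leibler divergence. A minimal sketch, assuming a strictly positive input matrix (the function name is illustrative):

```python
import numpy as np

def sinkhorn_doubly_stochastic(W, n_iter=1000, tol=1e-9):
    """Alternately rescale rows and columns of a positive matrix until it is
    (approximately) doubly stochastic."""
    K = np.asarray(W, dtype=float).copy()
    for _ in range(n_iter):
        K /= K.sum(axis=1, keepdims=True)  # make rows sum to 1
        K /= K.sum(axis=0, keepdims=True)  # make columns sum to 1
        # Converged when rows remain (near) stochastic after the column step.
        if np.abs(K.sum(axis=1) - 1.0).max() < tol:
            break
    return K
```

The L1 error measure the quote associates with Ratio-cut leads to a different normalization; the sketch above covers only the relative-entropy case.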
“…Complex data are handled well by the third class of algorithms, which consists of the many variants of spectral factorization [8,14,17,15,21,12] and which has become the state of the art in image segmentation. These algorithms do not make strong assumptions about the shape of clusters, and thus generally perform better on images.…”
Section: Introduction (mentioning)
confidence: 99%