1985
DOI: 10.1016/0304-3975(85)90224-5
Clustering to minimize the maximum intercluster distance

Cited by 1,324 publications (912 citation statements). References 6 publications.
“…A hypothesis is that better pivots are far away from each other [8], [10]. A simple algorithm based on this hypothesis sequentially selects pivot p̂ from a set of data objects X one by one, in the same way as ε-net construction in [15], [21], as expressed by …”
Section: Pivot Generation and Data Partitioning Algorithms
confidence: 99%
“…We will also investigate alternative rewritings for the cardinality clauses and methods to reduce the number of constraint clauses. Finally, we will extend the method presented here to other classification tasks that can be formalized as hard subset selection problems, such as SNN [23], k-NN [13], k-center [17], CNNDD [2], and others.…”
Section: Discussion
confidence: 99%
“…Most useful classification tasks can be formulated as subset selection problems [6,17,12,26,28]. Subsets to be singled out have to possess certain properties guaranteeing that they represent a model of the whole training set, according to the specific classification rule.…”
Section: Introduction
confidence: 99%
“…In this method, n sample points are divided into k clusters such that the maximum distance of a point to its cluster center is minimized. This is equivalent to solving the k-center problem, which is NP-hard but admits a greedy approximation [86]. This partitioning and the use of a new multivariate Taylor expansion dramatically reduce the cost of the fast Gauss transform, but at the cost of reduced accuracy.…”
Section: Related Work
confidence: 99%
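The greedy k-center heuristic referenced above is commonly attributed to this paper (farthest-first traversal, a 2-approximation). A minimal sketch, assuming points in a metric space with a user-supplied distance function (the function and variable names here are illustrative, not from the source):

```python
def greedy_k_center(points, k, dist):
    """Farthest-first traversal: greedily pick each new center as the
    point farthest from all centers chosen so far (2-approximation
    for the k-center problem)."""
    # Start from an arbitrary point (here: the first one).
    centers = [points[0]]
    # d[i] = distance from points[i] to its nearest chosen center.
    d = [dist(p, centers[0]) for p in points]
    while len(centers) < k:
        # Pick the point farthest from all current centers.
        i = max(range(len(points)), key=lambda j: d[j])
        centers.append(points[i])
        # Update each point's nearest-center distance.
        d = [min(d[j], dist(points[j], points[i])) for j in range(len(points))]
    return centers, max(d)  # max(d) is the achieved clustering radius

# Usage: 1-D points under the absolute-difference metric.
pts = [0.0, 1.0, 2.0, 10.0, 11.0, 20.0]
centers, radius = greedy_k_center(pts, 3, lambda a, b: abs(a - b))
# → centers [0.0, 20.0, 10.0], radius 2.0
```

Each iteration costs O(n), so the whole traversal runs in O(nk) distance evaluations, which is why it appears both as a k-center solver and as the pivot-selection rule quoted above.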