2021
DOI: 10.1117/1.jei.30.6.063029

k-Means image segmentation using Mumford–Shah model

Cited by 2 publications (6 citation statements)
References 0 publications
“…From Table 3, the k-means++ and k-means2o simultaneously obtained the maximum FMI value for 8 of the 15 datasets. This shows that the two algorithms have the same performance, and further performance comparison and analysis of other evaluation indicators are required. From the view of ARI in Table 3, the most significant and direct conclusion is that the k-means2o outperforms the k-means++ on most datasets, and the performance of the two algorithms is also very close on the few datasets where it is inferior to k-means++.…”

Algorithm fragment interleaved in the extracted quotation:
(5) end for
(6) for j = 1 to max_iter do
(7)   for ∀x ∈ S_core do
(8)     classify x into the corresponding cluster according to the principle of the nearest distance between x and C_K
(9)   end for
(10)  if SSE does not change then
(11)    break
(12)  end if
(13) …

Section: Experimental Results And
Citation type: mentioning
confidence: 99%
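The algorithm fragment interleaved in the quotation describes the standard k-means iteration: assign each point to its nearest center, update the centers, and stop when the sum of squared errors (SSE) no longer changes. A minimal sketch of that loop in plain Python (the function name, `max_iter`, and the exact update order are assumptions, not taken from the cited paper):

```python
def kmeans_assign_loop(points, centers, max_iter=100):
    """Repeat nearest-center assignment and center update until SSE stops changing."""
    def dist2(a, b):
        # squared Euclidean distance between two vectors
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b))

    prev_sse = None
    labels = [0] * len(points)
    for _ in range(max_iter):
        # steps (7)-(9): classify each x into the cluster of its nearest center
        labels = [min(range(len(centers)), key=lambda k: dist2(x, centers[k]))
                  for x in points]
        # update each center to the mean of its assigned points
        for k in range(len(centers)):
            members = [x for x, l in zip(points, labels) if l == k]
            if members:
                centers[k] = [sum(c) / len(members) for c in zip(*members)]
        # steps (10)-(12): break when the SSE does not change
        sse = sum(dist2(x, centers[l]) for x, l in zip(points, labels))
        if sse == prev_sse:
            break
        prev_sse = sse
    return labels, centers
```

On two well-separated pairs of points with one starting center near each pair, the loop converges after the first pass once the SSE stabilizes.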
“…However, as one of the most classic clustering algorithms, k-means, which aims to partition the given dataset into K subsets so as to minimize the within-cluster sum of squared distances, continues to be one of the most popular clustering algorithms [7]. Its efficiency and simplicity of implementation have made it successful in various fields, such as image processing [8, 9], education [10], bioinformatics [11], medicine [12], partial multi-view data [13], agricultural data [14], and fuzzy decision-making [15].…”
Section: Introduction
Citation type: mentioning
confidence: 99%
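The objective the quotation names, the within-cluster sum of squared distances, is J = Σ_k Σ_{x∈C_k} ‖x − μ_k‖². A small sketch of evaluating it for a given partition (a hypothetical helper, not from the cited works):

```python
def within_cluster_sse(points, labels, centers):
    """k-means objective: total squared distance from each point to its cluster center."""
    return sum(
        sum((xi - ci) ** 2 for xi, ci in zip(x, centers[l]))
        for x, l in zip(points, labels)
    )
```

k-means only ever decreases this quantity, which is why an unchanged SSE is a valid stopping test.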
“…In work [16] it is proposed to segment images with the k-means algorithm using the Mumford–Shah model. The essence of the approach [16] consists in applying the k-means algorithm for color quantization in the color models of the image representation. However, the pixels are grouped into clusters only in the color space, without taking into account the connectivity between them, which in turn leads to fragmentation of the segments.…”
Section: Literature Review and Problem Statement
Citation type: mentioning
confidence: 99%
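The color-quantization use of k-means described in the quotation can be sketched as clustering pixel color vectors alone, with no spatial coordinates; this is an illustrative sketch under that assumption, not the implementation from [16]. Because only color is clustered, two spatially distant pixels with similar colors receive the same label, which is exactly the fragmentation the citing authors point out:

```python
def quantize_colors(pixels, k, init_centers, iters=10):
    """Cluster pixel colors (color space only, no spatial information) and
    replace each pixel by the mean color of its cluster."""
    centers = [list(c) for c in init_centers]
    labels = [0] * len(pixels)
    for _ in range(iters):
        # assign each pixel to the nearest color center
        labels = [
            min(range(k),
                key=lambda j: sum((p - c) ** 2 for p, c in zip(px, centers[j])))
            for px in pixels
        ]
        # update each center to the mean color of its members
        for j in range(k):
            members = [px for px, l in zip(pixels, labels) if l == j]
            if members:
                centers[j] = [sum(ch) / len(members) for ch in zip(*members)]
    return [centers[l] for l in labels], labels
```

Note that the pixel order carries no weight here: reddish pixels at opposite corners of an image would still share one cluster, so the resulting "segments" need not be connected regions.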