Abstract: Color quantization is an important operation with many applications in graphics and image processing. Most quantization methods are essentially based on data clustering algorithms. Recent studies have demonstrated the effectiveness of the hard c-means (k-means) clustering algorithm in this domain; other studies reported similar findings for the fuzzy c-means algorithm. Interestingly, none of these studies directly compared the two types of c-means algorithms. In this study, we implement fast and exact va…
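The hard c-means (k-means) color quantization the abstract refers to can be sketched as follows. This is an illustrative NumPy implementation of Lloyd's algorithm on RGB pixels, not the paper's exact variant; all names are our own.

```python
import numpy as np

def kmeans_quantize(pixels, k, iters=10, seed=0):
    """Quantize RGB pixels of shape (N, 3) to a k-color palette (Lloyd's algorithm)."""
    rng = np.random.default_rng(seed)
    # Initialize centroids from k distinct randomly chosen pixels.
    centroids = pixels[rng.choice(len(pixels), size=k, replace=False)].astype(float)
    for _ in range(iters):
        # Hard assignment: each pixel belongs to exactly one palette color.
        dists = np.linalg.norm(pixels[:, None, :] - centroids[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        for j in range(k):
            members = pixels[labels == j]
            if len(members):            # guard against an empty cluster
                centroids[j] = members.mean(axis=0)
    return centroids, labels

# Tiny synthetic "image": two well-separated color blobs (red-ish, blue-ish).
pixels = np.array([[250., 10., 10.], [245., 5., 12.],
                   [10., 10., 250.], [12., 8., 245.]])
palette, labels = kmeans_quantize(pixels, k=2)
quantized = palette[labels]   # each pixel replaced by its nearest palette color
```

Because the initial centroids are random, practical implementations restart several times and keep the best result, which is the "running multiple times" practice mentioned in the citation context below.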
“…However, running multiple times is the standard practice for k-means to produce reliable results. Furthermore, for fuzzy c-means clustering, [13] illustrates that it takes longer than k-means (hard c-means), yet the performance is not significantly better. Thus, fuzzy c-means quantization is not included in the experiments.…”
Section: Results (mentioning, confidence: 99%)
“…Similarly, the authors of [12] designed an adaptive clustering method specifically fitting their quantization technique. It is noticeable that the term 'k-means quantization' is sometimes referred to as 'hard c-means quantization' in related research [13], and the similar concept 'fuzzy c-means quantization' refers to a variation of the method that assigns a 'fuzzy partition/membership parameter' to each data point [14]. A key difference between fuzzy c-means and k-means quantization methods is that during the clustering procedure, each data point contributes to the update of every cluster in the former, while it affects only one specific cluster in the latter.…”
Section: Related Work (mentioning, confidence: 99%)
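The fuzzy-versus-hard update difference described in the quote above can be made concrete with one fuzzy c-means update step. This is a generic NumPy sketch of the standard FCM membership and centroid formulas (fuzzifier m), not any cited paper's implementation:

```python
import numpy as np

def fcm_step(X, centroids, m=2.0):
    """One fuzzy c-means update: every point contributes to every centroid."""
    # Membership u[i, j]: degree to which point i belongs to cluster j.
    d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2) + 1e-9
    inv = d ** (-2.0 / (m - 1.0))
    u = inv / inv.sum(axis=1, keepdims=True)   # each row sums to 1
    # Centroid update is a weighted mean over ALL points (fuzzy),
    # versus a plain mean over assigned points only (hard k-means).
    w = u ** m
    new_centroids = (w.T @ X) / w.sum(axis=0)[:, None]
    return u, new_centroids

X = np.array([[0.0, 0.0], [0.1, 0.0], [5.0, 5.0]])
u, c = fcm_step(X, centroids=np.array([[0.0, 0.0], [5.0, 5.0]]))
```

Taking `u.argmax(axis=1)` recovers a hard assignment, which is why hard c-means can be viewed as the limiting case of this update.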
“…Equation 13 is similar to Elastic Net [42], but with a negative l1 coefficient's l2 counterpart, i.e., a negative l2 coefficient. The intuition behind this scheme is that l1 optimization often leaves α values at small magnitudes before they reach 0.…”
Section: Convergence of the Optimization Target (mentioning)
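For reference, the standard Elastic Net objective [42] uses nonnegative penalty coefficients; the scheme quoted above differs in flipping the sign of the l2 term. A sketch in conventional Elastic Net notation (the paper's own Equation 13 is not reproduced in this snippet, so the symbols here are the usual ones, not necessarily the paper's):

```latex
\min_{\alpha}\; \|y - X\alpha\|_2^2
  + \lambda_1 \|\alpha\|_1
  + \lambda_2 \|\alpha\|_2^2,
\qquad \lambda_1, \lambda_2 \ge 0 .
```

With $\lambda_2 < 0$ instead, the quadratic term rewards larger magnitudes, counteracting the tendency of pure $l_1$ shrinkage to leave many $\alpha$ entries small but nonzero, and pushing them to exactly 0.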
Quantization can be used to form new vectors/matrices whose shared values are close to the original. In recent years, the popularity of scalar quantization for value-sharing applications has soared, as it has proven highly useful for reducing the complexity of neural networks. Existing clustering-based quantization techniques, while well developed, have multiple drawbacks, including dependency on the random seed, empty or out-of-range clusters, and high time complexity for large numbers of clusters. To overcome these problems, this paper examines the problem of scalar quantization from a new perspective, namely sparse least square optimization. Specifically, inspired by the properties of sparse least square regression, several quantization algorithms based on l1 least squares are proposed, along with similar schemes using l1 + l2 and l0 regularization. Furthermore, to compute quantization results with a given number of values/clusters, the paper designs an iterative method and a clustering-based method, both built on sparse least squares. The paper shows that the latter method is mathematically equivalent to an improved version of the k-means clustering-based quantization algorithm, although the two algorithms originated from different intuitions. The proposed algorithms were tested on three types of data, and their computational performance, including information loss, time consumption, and the distribution of the values of the sparse vectors, was compared and analyzed. The paper offers a new perspective on quantization, and the proposed algorithms can outperform existing methods, especially under bit-width reduction scenarios in which the required post-quantization resolution (number of values) is not significantly lower than the original number.
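The mechanism underlying l1 least squares that the abstract builds on is the proximal operator of the l1 norm (soft thresholding), which shrinks entries toward zero and sets small ones exactly to zero. A minimal sketch of that standard operator (not the paper's quantization algorithm itself):

```python
import numpy as np

def soft_threshold(w, lam):
    """Proximal operator of lam * ||.||_1: shrink every entry toward zero,
    setting entries with magnitude below lam exactly to zero."""
    return np.sign(w) * np.maximum(np.abs(w) - lam, 0.0)

w = np.array([0.9, 0.05, -0.7, -0.03, 0.4])
q = soft_threshold(w, lam=0.1)   # small entries collapse to exactly 0.0
```

This exactness of the zeros is what makes an l1 penalty attractive for value sharing, in contrast to clustering-based quantization, whose result depends on the random seed.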
“…Performance evaluation of the proposed approach was validated on medical MRI images from different modalities. The author of [11] examined the performance of FCM, k-means, and c-means. Two distance measures, Euclidean (ED) and Manhattan (MH), are used to observe how the choice of distance measure influences the overall clustering performance.…”
Therapeutic MR image segmentation is a difficult task in medical image processing, and real-world medical images present a large number of issues.
This research paper presents a fuzzy clustering technique based on bias field estimation. Bias field estimation is applied to scans corrupted by salt-and-pepper noise. The approach makes it easy and simple to classify a given medical image database into a number of clusters fixed a priori. The article applies segmentation and bias field estimation to brain MR images using the fuzzy clustering algorithm. The improved technique evaluates the ability of fuzzy c-means to segment white and gray matter, offering additional potential for segmenting MRI data efficiently and with less time consumption. Gaussian weights are used to explore the distribution of the feature vectors in the scanned image clusters. An empirical evaluation of UFCA and fuzzy clustering with bias field estimation is performed.
“…An object is allocated only to the cluster with which it has the greatest level of similarity [60]. K-means (hard c-means) is an important and well-known hard clustering technique.…”
Recommender systems can filter unseen information to predict whether a particular user would prefer a given item when making a choice. Over the years, this process has depended on robust applications of data mining and machine learning techniques, which are known to have scalability issues when applied to recommender systems. In this paper, we propose a k-means clustering-based recommendation algorithm that addresses the scalability issues associated with traditional recommender systems. An issue with traditional k-means clustering algorithms is that they choose the initial k centroids randomly, which leads to inaccurate recommendations and increased cost for offline training of clusters. The work in this paper highlights how centroid selection in k-means-based recommender systems can improve performance as well as save cost. The proposed centroid selection method can exploit underlying data correlation structures, and it exhibits superior accuracy and performance in comparison to traditional centroid selection strategies that choose centroids randomly. The proposed approach has been validated with an extensive set of experiments based on five different datasets (from the movie, book, and music domains). These experiments show that the proposed approach produces better-quality clusters and converges more quickly than existing approaches, which in turn improves the accuracy of the recommendations provided.
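The clustering-based recommendation pipeline described above can be sketched as follows: cluster users by their rating rows, then score a target user's unrated items by the mean rating within that user's cluster. This is a generic illustration, not the paper's centroid-selection method; the cluster labels are hardcoded here to stand in for a k-means result.

```python
import numpy as np

def recommend_from_cluster(ratings, labels, user, top_n=1):
    """Score a user's unrated items by the mean rating of the user's cluster.
    `labels` would come from k-means over the rating rows; here they are
    supplied directly so the example stays self-contained."""
    peers = ratings[labels == labels[user]]
    observed = np.where(peers > 0, peers, np.nan)      # 0 means "not rated"
    item_scores = np.nanmean(observed, axis=0)         # cluster mean per item
    # Only items the target user has not rated are candidates.
    candidates = np.where(ratings[user] == 0, item_scores, -np.inf)
    return np.argsort(candidates)[::-1][:top_n]

# 4 users x 3 items; two obvious user clusters (rows 0-1 and rows 2-3).
ratings = np.array([[5, 0, 1],
                    [4, 5, 1],
                    [1, 0, 5],
                    [1, 2, 4]])
labels = np.array([0, 0, 1, 1])   # stand-in for k-means cluster labels
rec = recommend_from_cluster(ratings, labels, user=0)
```

Restricting the neighborhood to one cluster is what gives the scalability benefit: prediction touches only the cluster's rows rather than the full user-item matrix.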