2021
DOI: 10.4018/ijitsa.2021070107

Improvement of K-Means Algorithm for Accelerated Big Data Clustering

Abstract: With the rapid development of computing, especially in recent years, "Internet +," cloud platforms, and similar technologies have been adopted across many industries, and data of all kinds has grown rapidly in volume. These large amounts of data often contain very rich information, yet traditional data retrieval and analysis methods and data management models can no longer meet our needs for data acquisition and management. Therefore, data mining technology has become one of the solutions to how to quic…

Cited by 7 publications (7 citation statements). References 24 publications (3 reference statements).
“…Therefore, the difficulty of predicting a sample as a negative sample is much smaller than the difficulty of predicting it as a positive sample. As a result, this paper adopts the focal loss method for dense object detection (Lin et al., 2017; Wu et al., 2021)…”
Section: Prediction Model (mentioning)
confidence: 99%
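For context, the focal loss referenced in this statement is the one introduced for dense object detection by Lin et al. (2017); its standard form is

$$\mathrm{FL}(p_t) = -\alpha_t\,(1 - p_t)^{\gamma}\,\log(p_t),$$

where $p_t$ is the predicted probability of the ground-truth class, $\alpha_t$ balances positive against negative samples, and $\gamma \ge 0$ down-weights easy, well-classified samples so that hard samples dominate the loss.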
“…It can have better prediction results when the number of positive and negative samples in the training differs significantly. As described in the article on focal loss for dense object detection (Lin et al., 2017; Wu et al., 2021), adjusting the α parameter allows the model to focus more on positive samples with smaller sample sizes. Adjusting the γ parameter allows the model to focus more on difficult-to-judge samples.…”
Section: Modeling and Tuning (mentioning)
confidence: 99%
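To illustrate the tuning described in this statement, below is a minimal NumPy sketch of a binary focal loss; the function name and the default values alpha=0.25 and gamma=2.0 are illustrative assumptions, not the cited paper's settings.

```python
import numpy as np

def binary_focal_loss(y_true, y_pred, alpha=0.25, gamma=2.0, eps=1e-7):
    """Binary focal loss (Lin et al., 2017), averaged over samples.

    alpha  weights the (typically minority) positive class more heavily;
    gamma  down-weights easy, well-classified samples so that training
           focuses on hard-to-judge ones.
    """
    y_pred = np.clip(y_pred, eps, 1.0 - eps)
    # p_t: predicted probability assigned to the true class of each sample
    p_t = np.where(y_true == 1, y_pred, 1.0 - y_pred)
    alpha_t = np.where(y_true == 1, alpha, 1.0 - alpha)
    return np.mean(-alpha_t * (1.0 - p_t) ** gamma * np.log(p_t))

# Increasing gamma shrinks the contribution of easy samples to the loss
y_true = np.array([1, 1, 0, 0, 0, 0])
y_pred = np.array([0.9, 0.3, 0.1, 0.2, 0.05, 0.6])
print(binary_focal_loss(y_true, y_pred, gamma=0.0))
print(binary_focal_loss(y_true, y_pred, gamma=2.0))
```

Raising alpha shifts weight toward the positive class, while raising gamma flattens the loss for samples the model already classifies confidently.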
“…Traditional data analysis methods can effectively extract the features of data of low dimension. When the data dimension is too high, the effect of these methods will be significantly reduced (Wu et al., 2021).…”
Section: Fault Identification Model Based on Improved DBN Network Model (mentioning)
confidence: 99%
“…The COP metric measures the intracluster tightness of a class cluster in terms of the average distance from data objects within the class cluster to the class cluster centroid, and the intercluster separation of a class cluster in terms of the minimum of the maximum distance from data objects outside the class cluster to data objects within it. The COP index is a minimum-value index, that is, the clustering algorithm has the best division effect when the index achieves its minimum value [21].…”
Section: K-means Algorithm (mentioning)
confidence: 99%
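Based on the description in this statement, here is a minimal sketch of how such a COP index could be computed for a K-means partition; the function name, the use of Euclidean distance, and the size-weighted averaging are assumptions drawn from the quoted wording, with lower values indicating a better partition.

```python
import numpy as np

def cop_index(X, labels):
    """COP validity index as described above (lower is better).

    Per cluster: intracluster tightness = average distance of its points
    to the cluster centroid; intercluster separation = minimum, over
    points outside the cluster, of the maximum distance to points inside
    it. The index is the size-weighted average of tightness / separation.
    Assumes the partition contains at least two clusters.
    """
    X = np.asarray(X, dtype=float)
    labels = np.asarray(labels)
    total = 0.0
    for k in np.unique(labels):
        in_k = labels == k
        members, others = X[in_k], X[~in_k]
        centroid = members.mean(axis=0)
        intra = np.linalg.norm(members - centroid, axis=1).mean()
        # pairwise distances: rows = points outside the cluster, cols = members
        d = np.linalg.norm(others[:, None, :] - members[None, :, :], axis=2)
        inter = d.max(axis=1).min()
        total += in_k.sum() * intra / inter
    return total / len(X)

# Tiny usage example with two well-separated clusters
X = np.array([[0.0, 0.0], [0.1, 0.0], [0.0, 0.1],
              [5.0, 5.0], [5.1, 5.0], [5.0, 5.1]])
labels = np.array([0, 0, 0, 1, 1, 1])
print(cop_index(X, labels))
```

A tight, well-separated partition such as the example above yields a small value, matching the statement that the best division corresponds to the index's minimum.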