Quantum k-means algorithm based on Manhattan distance
Quantum Information Processing (2021). DOI: 10.1007/s11128-021-03384-7

Cited by 25 publications (15 citation statements: 0 supporting, 15 mentioning, 0 contrasting)
References 35 publications

Citation statements, ordered by relevance:
“…Fig. 3: vector representation of each tweet generated by the BERT model. K-means computes the centroids and iterates until an optimal centroid is found [27]. Data points are assigned to clusters so that the sum of squared distances between each data point and its centroid is minimized [28]. As listed in Table 1, k-means on the t-SNE embedding has a higher silhouette score than on the principal component analysis embedding.…”
Section: F. Web Interface (mentioning)
confidence: 99%
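The assignment-and-update loop this excerpt describes is easy to sketch. Below is a minimal NumPy version, assuming squared Euclidean distance as the objective; the tolerance, seeding, and empty-cluster handling are illustrative choices, not the cited papers' exact setup.

```python
import numpy as np

def kmeans(X, k, n_iters=100, tol=1e-6, seed=0):
    """Minimal k-means: assign each point to its nearest centroid,
    recompute centroids, repeat until they stop moving."""
    rng = np.random.default_rng(seed)
    centroids = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(n_iters):
        # Squared Euclidean distance from every point to every centroid.
        d2 = ((X[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=2)
        labels = d2.argmin(axis=1)
        new_centroids = np.array([
            X[labels == j].mean(axis=0) if np.any(labels == j) else centroids[j]
            for j in range(k)
        ])
        if np.linalg.norm(new_centroids - centroids) < tol:
            break
        centroids = new_centroids
    # Objective: sum of squared distances of points to their own centroid.
    inertia = d2[np.arange(len(X)), labels].sum()
    return labels, centroids, inertia
```

The t-SNE-versus-PCA comparison in the excerpt would then amount to running this on each embedding and scoring the resulting labels, e.g. with sklearn.metrics.silhouette_score(X, labels).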
“…Additionally, the realignment method, originally derived for quantifying entanglement, was studied in [131] for steering; this is related to the positivity of states containing steering, in the sense of the PPT criterion [236,237].…”
Section: Positivity Criteria (mentioning)
confidence: 99%
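For context, the realignment criterion mentioned here has a short numerical form: reshuffle the indices of a bipartite density matrix and sum the singular values of the result; separable states give a value of at most 1. A minimal NumPy sketch, assuming the standard convention R_{ik,jl} = ρ_{ij,kl} (the function name is mine):

```python
import numpy as np

def realignment_trace_norm(rho, dA, dB):
    """Realignment (CCNR) test for a density matrix on C^dA (x) C^dB.
    Build R[(i,k),(j,l)] = rho[(i,j),(k,l)] and return the sum of its
    singular values; a result > 1 certifies entanglement."""
    R = (rho.reshape(dA, dB, dA, dB)   # split row/column indices: (i, j, k, l)
            .transpose(0, 2, 1, 3)     # reshuffle to (i, k, j, l)
            .reshape(dA * dA, dB * dB))
    return np.linalg.svd(R, compute_uv=False).sum()

# Example: a two-qubit Bell state gives trace norm 2 > 1.
bell = np.zeros((4, 4))
bell[[0, 0, 3, 3], [0, 3, 0, 3]] = 0.5
print(realignment_trace_norm(bell, 2, 2))  # ~2.0
```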
“…It can only mine linear relationships in the data, not nonlinear ones. 3. In practical applications, data labels are often missing, so it is not suited to such situations.…”
Section: Proposed Piecewise Weighted With Hyper-class Representation (mentioning)
confidence: 99%
“…However, the existing similarity calculations, such as Euclidean distance [2], Manhattan distance [3] and cosine distance [4], all operate on the values of corresponding attributes, which ignores the importance of each attribute [5]: data contain redundant attributes, and different attributes carry different weights.…”
Section: Introduction (mentioning)
confidence: 99%
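The three measures named in this excerpt are one-liners, which makes its point easy to see: each treats every attribute uniformly. A sketch, with a weighted Manhattan variant in which the weight vector w (an illustrative assumption, standing in for whatever attribute-importance scheme a method might learn) folds in per-attribute importance:

```python
import numpy as np

def euclidean(x, y):
    return np.sqrt(((x - y) ** 2).sum())

def manhattan(x, y):
    return np.abs(x - y).sum()

def cosine_distance(x, y):
    return 1.0 - (x @ y) / (np.linalg.norm(x) * np.linalg.norm(y))

def weighted_manhattan(x, y, w):
    # Per-attribute weights let important attributes dominate the
    # distance; plain Manhattan is the special case w = 1 everywhere.
    return (w * np.abs(x - y)).sum()
```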