Materials, Computer Engineering and Education Technology 2021
DOI: 10.4028/www.scientific.net/ast.105.309
Time-Weighted Collaborative Filtering Algorithm Based on Improved Mini Batch K-Means Clustering

Abstract: The traditional collaborative filtering recommendation algorithm suffers from a sparse rating matrix, weak scalability, and drift in user interest, which lead to low algorithm efficiency and low rating-prediction accuracy. To address these problems, this paper proposes a time-weighted collaborative filtering algorithm based on improved Mini Batch K-Means clustering. First, the algorithm selects the Pearson correlation coefficient to improve Mini Batch K-Means clustering, and uses the impr…
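The abstract is truncated, but the core idea it describes — clustering users' rating vectors with Mini Batch K-Means, with assignment driven by a Pearson-correlation-based distance — can be sketched as follows. All names and details here are illustrative assumptions, not the paper's actual implementation:

```python
import numpy as np

def pearson_distance(u, v):
    """1 - Pearson correlation: 0 for perfectly correlated rating profiles, up to 2 for opposite ones."""
    u = u - u.mean()
    v = v - v.mean()
    denom = np.linalg.norm(u) * np.linalg.norm(v)
    return 1.0 if denom == 0 else 1.0 - (u @ v) / denom

def mini_batch_kmeans(X, k, batch_size=32, n_iters=100, seed=0):
    """Mini Batch K-Means where assignment uses Pearson distance (illustrative sketch)."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), k, replace=False)].astype(float)
    counts = np.zeros(k)
    for _ in range(n_iters):
        batch = X[rng.choice(len(X), batch_size)]          # sample one mini-batch
        for x in batch:
            j = min(range(k), key=lambda c: pearson_distance(x, centers[c]))
            counts[j] += 1
            eta = 1.0 / counts[j]                          # per-center learning rate
            centers[j] = (1 - eta) * centers[j] + eta * x  # online center update
    return centers
```

Clustering by correlation rather than Euclidean distance groups users with similar rating *patterns* even when their rating scales differ, which is a common motivation for a Pearson-based variant.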

Cited by 26 publications (23 citation statements). References 4 publications.
“…(2), k(x_i) represents the degree of node x_i and a_il is an n × m adjacency matrix, whose expression is given in Eq. (3)…”
Section: Optimization Of Network Inference Algorithms
confidence: 99%
“…However, the performance of recommendation systems urgently needs improvement, because most existing recommendation systems use collaborative filtering and their accuracy is not high (Zheng G et al 2020) [2]. In recent years, network inference algorithms have become popular in the field of recommendation due to their higher accuracy compared to collaborative filtering techniques (Han X et al 2021) [3]. Traditional network inference algorithms are also used in consumer recommendation systems, but even the performance of recommendation systems that apply traditional network inference algorithms does not reach the expected results (Fukushima A et al 2018) [4].…”
Section: Introduction
confidence: 99%
“…This is the standard optimization method recently adopted to train deep neural networks, and it is easy to implement [22]. At each iteration, the objective function is run on a subset of the training set, which is called the mini-batch.…”
Section: Features Of The LSTM Model
confidence: 99%
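The mini-batch procedure this excerpt describes can be sketched in a few lines — here as mini-batch gradient descent on a least-squares objective. This is a generic illustration of the training loop, not code from any of the cited papers:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
true_w = np.array([1.5, -2.0, 0.5])
y = X @ true_w + rng.normal(0, 0.01, 200)

w = np.zeros(3)
batch_size, lr = 32, 0.1
for step in range(500):
    idx = rng.choice(len(X), batch_size, replace=False)   # draw one mini-batch
    Xb, yb = X[idx], y[idx]
    grad = 2 * Xb.T @ (Xb @ w - yb) / batch_size          # gradient on the batch only
    w -= lr * grad

print(np.round(w, 2))  # close to [1.5, -2.0, 0.5]
```

Each step touches only `batch_size` examples, which is what makes the method cheap per iteration and, as the excerpt notes, easy to implement.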
“…Han et al subdivided collaborative filtering algorithms into two types: one is the collaborative filtering algorithm based on high similarity, and the other is the collaborative filtering algorithm based on a data model. The collaborative filtering method based on high neighbor similarity focuses on analyzing the relationship between users and entities; the model-based collaborative filtering algorithm learns a prediction model from users' ratings of entities or other behaviors, builds feature associations between users and entities, and recommends entities with high user preference to the users [2]. Many researchers have studied model-based collaborative filtering algorithms, such as Ngaffo AN's research on matrix factorization models.…”
Section: Introduction
confidence: 99%
“…The advantage of overlapping techniques is that they allow users to feed back their behavior and ratings in social networks while belonging to multiple clusters simultaneously [7]. (2) The problem of sparsity: Chen et al combine the evaluation of individual cognitive behaviors, user cognitive relationships, and time decay coefficients into a probability matrix factorized by a single model, together with a social interaction coefficient, for personalized recommendation [8]. Rodpysh et al pointed out that the reason for sparse data is that most users have reviewed only a very limited number of products.…”
Section: Introduction
confidence: 99%
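The time decay coefficient mentioned in this excerpt — and in the paper's own title — is commonly realized as an exponential forgetting function over rating timestamps. A generic sketch, where the half-life and the exact function form are illustrative assumptions rather than values from any of the cited papers:

```python
def time_weight(t_rating, t_now, half_life_days=90.0):
    """Exponential decay: a rating loses half its weight every `half_life_days`."""
    age = t_now - t_rating                      # age of the rating, in days
    return 0.5 ** (age / half_life_days)

# A 90-day-old rating counts half as much as a fresh one.
print(time_weight(t_rating=0.0, t_now=90.0))   # 0.5
```

In a time-weighted collaborative filtering setting, such a weight would multiply each rating's contribution to the similarity computation or prediction sum, so that recent behavior dominates and drifting user interests are tracked.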