2022
DOI: 10.21203/rs.3.rs-2001665/v1
Preprint
Improved collaborative filtering algorithm using intuitionistic fuzzy C-means clustering and incorporating time factors

Abstract: Collaborative filtering (CF) has been widely used to help users discover items of interest. Most existing CF methods collect preference information from other users to predict a user’s ratings. However, a user’s interests change over time, and traditional collaborative filtering recommendation does not take this change into consideration. In addition, applications also suffer from information overload and data sparseness. In order to solve the problem of data spars…
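The abstract breaks off before describing the method itself, but the core idea it raises, that collaborative-filtering predictions should weight recent ratings more heavily than old ones, can be illustrated with a minimal sketch. The exponential decay rate `lam`, the function name, and the plain user-based weighting below are illustrative assumptions; the paper’s actual algorithm additionally relies on intuitionistic fuzzy C-means clustering, which is not reproduced here.

```python
import numpy as np

def time_weighted_predict(ratings, timestamps, sim, user, item, now, lam=0.01):
    """Predict ratings[user, item] with a time-decayed user-based CF scheme.

    ratings    : (n_users, n_items) array, 0 means "not rated"
    timestamps : (n_users, n_items) array of rating times, same units as `now`
    sim        : (n_users, n_users) user-user similarity matrix
    lam        : assumed exponential decay rate (hypothetical parameter)
    """
    num, den = 0.0, 0.0
    for v in range(ratings.shape[0]):
        if v == user or ratings[v, item] == 0:
            continue
        # ratings made long before `now` contribute less to the prediction
        decay = np.exp(-lam * (now - timestamps[v, item]))
        w = sim[user, v] * decay
        num += w * ratings[v, item]
        den += abs(w)
    if den == 0.0:
        return ratings[ratings > 0].mean()   # fall back to the global mean rating
    return num / den
```

With `lam = 0`, the decay term vanishes and the sketch reduces to ordinary neighborhood-based collaborative filtering; larger values make old ratings fade faster.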

Cited by 3 publications (4 citation statements). References 32 publications (38 reference statements).
“…At present, the research on graph unlearning is still at an early stage and there are several open research questions. In future work, we would like to explore the potential of graph influence function in other applications, such as recommender systems [37,42], influential subgraph discovery, and certified data removal. Going beyond explicit unlearning requests, we will focus on the auto-repair ability of the deployed model, that is, to discover the adversarial attacks to the model and revoke their negative impact.…”
Section: Discussion
confidence: 99%
“…Generalized methods [12,21,54,56,59,66,68,69] aim to learn invariant representations against popularity bias and achieve stability and generalization ability. s-DRO [59] adopts the Distributionally Robust Optimization (DRO) framework, and CD²AN [12] disentangles item property representations from popularity under co-training networks.…”
Section: Related Work
confidence: 99%
“…COR [54] formulates feature shift as an intervention and performs counterfactual inference to mitigate the effect of out-of-date interactions. BC Loss [66] incorporates popularity bias-aware margins to achieve better generalization ability. InvCF [68] discovers disentangled representations that faithfully reveal the latent preference and popularity semantics.…”
Section: Related Work
confidence: 99%
“…It is noted that when Y_{u,i} = 0, it indicates either that the item is irrelevant to the user or that it can be a hidden-relevant item of the user [53]. … [25], neural networks [13,35], and graph neural networks [12,64]. To train recommender models, point-wise losses (e.g., binary cross-entropy, mean squared error), pair-wise losses (e.g., BPR [50], Margin Ranking loss [61]), and list-wise losses (e.g., InfoNCE [44], Sampled Softmax [67]) have been adopted [21]. After the recommender model is trained, we produce a ranking list π over the unobserved items in I_u^- by sorting the ranking scores f_θ(u, i) in descending order.…”
Section: Preliminaries 3.1 Recommendation With Implicit Feedback
confidence: 99%
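The excerpt above summarizes the standard implicit-feedback pipeline: train a scoring model f_θ(u, i) with a point-, pair-, or list-wise loss, then rank each user’s unobserved items by descending score. Below is a minimal sketch of that pipeline using matrix-factorization scores and a BPR-style pair-wise update; the embedding size, learning rate, regularization, and sampling scheme are illustrative assumptions, not details taken from the cited papers.

```python
import numpy as np

rng = np.random.default_rng(0)
n_users, n_items, dim = 100, 500, 16           # hypothetical sizes
P = 0.1 * rng.standard_normal((n_users, dim))  # user embeddings
Q = 0.1 * rng.standard_normal((n_items, dim))  # item embeddings

def bpr_step(u, i, j, lr=0.05, reg=0.01):
    """One BPR-style update: observed item i should outscore unobserved item j for user u."""
    p_u, q_i, q_j = P[u].copy(), Q[i].copy(), Q[j].copy()
    x = p_u @ (q_i - q_j)            # score margin f(u, i) - f(u, j)
    g = 1.0 / (1.0 + np.exp(x))      # d/dx log sigmoid(x) = sigmoid(-x)
    P[u] += lr * (g * (q_i - q_j) - reg * p_u)
    Q[i] += lr * (g * p_u - reg * q_i)
    Q[j] += lr * (-g * p_u - reg * q_j)

def rank_unobserved(u, observed_items, k=10):
    """Produce a top-k ranking list over items the user has not interacted with."""
    scores = Q @ P[u]                        # f_theta(u, i) for every item
    scores[list(observed_items)] = -np.inf   # exclude already-observed items
    return np.argsort(-scores)[:k]           # descending order, top k
```

In practice one would sample (u, i, j) triples from the observed interactions over many epochs and evaluate the resulting ranking lists with metrics such as Recall@k or NDCG@k.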