Proceedings of the Symposium on Applied Computing 2017
DOI: 10.1145/3019612.3019868
Personalised fading for stream data

Abstract: This paper describes a forgetting technique for the live update of viewer profiles based on individual sliding windows, fading and incremental matrix factorization. The individual sliding window maintains, for each viewer, a queue holding the last n viewer ratings. As new viewer events occur, they are inserted into the viewer queue by shifting and fading the queue ratings, and the viewer latent model is faded. We explored time, rating-and-position and popularity-based fading techniques, using the latter as the …
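The abstract's individual sliding window can be illustrated with a minimal sketch. This is not the paper's implementation: it assumes a single multiplicative fading factor `alpha` (an illustrative parameter), whereas the paper explores time-, rating-and-position- and popularity-based fading variants.

```python
from collections import deque

class ViewerWindow:
    """Sketch of a per-viewer sliding window with fading (illustrative only)."""

    def __init__(self, n, alpha=0.9):
        self.n = n                     # window size: last n ratings per viewer
        self.alpha = alpha             # hypothetical multiplicative fading factor
        self.queue = deque(maxlen=n)   # (item, rating) pairs, oldest first

    def add(self, item, rating):
        # Fade every rating already in the queue, then append the new event.
        # Once the queue is full, the oldest rating is dropped automatically.
        self.queue = deque(((i, r * self.alpha) for i, r in self.queue),
                           maxlen=self.n)
        self.queue.append((item, rating))

w = ViewerWindow(n=3, alpha=0.5)
w.add("a", 4.0)
w.add("b", 2.0)
w.add("c", 5.0)
w.add("d", 3.0)  # "a" is evicted; the remaining ratings have been faded
```

After the fourth event the queue holds the three most recent items, with older ratings progressively down-weighted.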

Cited by 6 publications (6 citation statements)
References 5 publications (11 reference statements)
“…The RMSE measures the error between the predicted rating and the real user rating. We compute the overall RMSE incrementally, i.e., whenever a new event occurs, following Takács [23], and use the Recall to evaluate the recommendation accuracy. To calculate the Recall, we first predict the ratings of all items not rated by the user, including the newly rated item.…”
Section: Evaluation Metrics
confidence: 99%
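The incremental overall RMSE mentioned above can be maintained with a running sum of squared errors, a sketch under the assumption that the metric is updated once per event (class and method names are illustrative):

```python
import math

class IncrementalRMSE:
    """Overall RMSE updated incrementally, one prediction/real-rating pair at a time."""

    def __init__(self):
        self.sq_sum = 0.0   # running sum of squared errors
        self.count = 0      # number of events seen so far

    def update(self, predicted, real):
        # Fold the new event into the running totals and return the overall RMSE.
        self.sq_sum += (predicted - real) ** 2
        self.count += 1
        return math.sqrt(self.sq_sum / self.count)

m = IncrementalRMSE()
m.update(3.5, 4.0)          # first event: squared error 0.25
rmse = m.update(2.0, 3.0)   # overall RMSE over both events
```

This avoids storing the full event history, which matters in a streaming setting.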
“…The quality of the top N recommendations can be determined using the Recall@N metric (Nilashi, bin Ibrahim, et al, ). To complement the Recall metric, Veloso, Malheiro, Burguillo, and Foss () propose a new classification accuracy metric, Target Recall (TRecall). TRecall evaluates the recommendation accuracy in terms of its closeness to the target value, that is, the real value given by the user.…”
Section: Rating Prediction
confidence: 99%
“…In this context, for each new rating event we determine the Recall of the top 10 recommendations as proposed by Cremonesi et al. [6] and the Target Recall of the top 10 recommendations as presented by Veloso et al. [35]. The Cremonesi et al. [6] metric includes: (1) the prediction of the ratings of all items unseen by the user, including the newly rated item; (2) the selection of 1000 unrated items plus the newly rated item; and (3) the sorting of the predictions in descending order.…”
Section: Evaluation Metrics
confidence: 99%
“…If the newly rated item belongs to the list of the top N predicted items for the user, it is counted as a hit. In the latter case, Veloso et al. [35] use all rated items instead of just the top-rated items: the Target Recall@N (TRecall@N) evaluates the recommendation accuracy using all user ratings.…”
Section: Evaluation Metrics
confidence: 99%
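One reading of the contrast drawn above, offered as a hedged sketch rather than the cited definition, is that Recall@N counts hits only on top-rated events while TRecall@N applies the hit test to every rating event. The `top_rating` threshold and function names are assumptions for illustration:

```python
def stream_recall(events, hit_test, top_rating=5.0):
    """Contrast Recall@N and TRecall@N over a stream of (item, rating) events.

    `hit_test(item)` returns True if the item lands in the user's top-N list.
    Recall@N averages hits over top-rated events only; TRecall@N over all events.
    """
    recall_hits = recall_total = trecall_hits = 0
    for item, rating in events:
        hit = hit_test(item)
        trecall_hits += hit            # TRecall@N: every rating event counts
        if rating >= top_rating:       # Recall@N: only top-rated events count
            recall_hits += hit
            recall_total += 1
    return (recall_hits / max(recall_total, 1),
            trecall_hits / max(len(events), 1))

events = [("a", 5.0), ("b", 3.0), ("c", 5.0)]
recall, trecall = stream_recall(events, hit_test=lambda i: i != "b")
```

Here both top-rated events hit (Recall 1.0), while TRecall also charges the missed mid-rated event (2/3).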