2015
DOI: 10.1007/978-3-319-16354-3_38
A Study of Smoothing Methods for Relevance-Based Language Modelling of Recommender Systems

Abstract: Language models have traditionally been used in fields such as speech recognition and document retrieval. Only recently was their use extended to collaborative recommender systems. In this field, a language model is estimated for each user based on the probabilities of the items. A central issue in the estimation of such a language model is smoothing, i.e., how to adjust the maximum likelihood estimator to compensate for rating sparsity. This work is devoted to exploring how the classical smoothing…

Cited by 7 publications (9 citation statements)
References 8 publications
“…When using language models for recommendation, we compute the probability of an item given a user, p(i|u), by smoothing the maximum likelihood estimate (MLE) p_ml(i|u) of a multinomial distribution [17,24]…”
Section: Smoothing Methods
confidence: 99%
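The statement above can be made concrete with a small sketch. Assuming ratings are stored as sparse per-user dictionaries (a hypothetical representation, not taken from the paper), two classical smoothing methods mentioned in this literature, Jelinek-Mercer and Dirichlet priors, would look like this:

```python
def p_background(item, ratings):
    """p(i|C): the item's share of all rating mass in the collection."""
    total = sum(sum(user.values()) for user in ratings.values())
    mass = sum(user.get(item, 0) for user in ratings.values())
    return mass / total

def p_ml(item, user_ratings):
    """Maximum likelihood estimate p_ml(i|u) from the user's own ratings."""
    return user_ratings.get(item, 0) / sum(user_ratings.values())

def p_jm(item, user_ratings, ratings, lam=0.5):
    """Jelinek-Mercer: linear interpolation of p_ml(i|u) with p(i|C)."""
    return (1 - lam) * p_ml(item, user_ratings) + lam * p_background(item, ratings)

def p_dirichlet(item, user_ratings, ratings, mu=100.0):
    """Dirichlet prior: mu pseudo-counts distributed according to p(i|C)."""
    total = sum(user_ratings.values())
    return (user_ratings.get(item, 0) + mu * p_background(item, ratings)) / (total + mu)
```

Both estimators assign non-zero probability to items the user never rated, which is exactly the sparsity compensation the abstract refers to, and both still sum to one over the item vocabulary.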
“…Previous work on using relevance models for recommendation has computed neighbourhoods using the k-NN algorithm with Pearson or cosine similarities [17,24,25]. In general, cosine similarity provides better results than Pearson in terms of accuracy metrics for top-N recommendation [5]. Additionally, this also holds for relevance models.…”
Section: Neighbourhood Methods
confidence: 99%
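The neighbourhood computation described in the quote can be sketched directly. This is a minimal illustration, assuming the same sparse per-user rating dictionaries as above; the function names are mine, not the cited papers':

```python
import math

def cosine(u, v):
    """Cosine similarity between two sparse rating vectors (dicts)."""
    common = set(u) & set(v)
    dot = sum(u[i] * v[i] for i in common)
    norm_u = math.sqrt(sum(x * x for x in u.values()))
    norm_v = math.sqrt(sum(x * x for x in v.values()))
    return dot / (norm_u * norm_v) if dot else 0.0

def neighbours(user, ratings, k=2):
    """k-NN neighbourhood: the k users most cosine-similar to `user`."""
    sims = [(cosine(ratings[user], ratings[v]), v)
            for v in ratings if v != user]
    sims.sort(reverse=True)
    return [v for _, v in sims[:k]]
```

Swapping `cosine` for a Pearson correlation over co-rated items would give the alternative similarity the quote compares against.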
“…The rationale behind this decision is that this smoothing method models the user bias yielding better recommendations. The reason is that the amount of smoothing applied from the background collection p(i|C) is inversely proportional to the average rating of the user [10]:…”
Section: Relevance Modelling of Recommender Systems
confidence: 99%
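Absolute discounting is a smoothing method with exactly the property the quote describes: subtracting a fixed δ from each observed rating frees a probability mass of δ·|I_u|/Σ_j r_uj = δ/r̄_u for the background model p(i|C), i.e., inversely proportional to the user's average rating r̄_u. A minimal sketch, assuming δ is smaller than every observed rating and `p_c` is a collection model summing to one:

```python
def p_abs_discount(item, user_ratings, p_c, delta=0.5):
    """Absolute discounting: subtract delta from each observed rating and
    redistribute the freed mass, delta*|I_u|/sum_j r_uj, via p(i|C)."""
    total = sum(user_ratings.values())
    n_rated = len(user_ratings)
    discounted = max(user_ratings.get(item, 0) - delta, 0) / total
    return discounted + (delta * n_rated / total) * p_c(item)
```

Note that the background coefficient `delta * n_rated / total` is δ divided by the user's mean rating, so users who rate generously receive less smoothing, which is the user-bias argument the quote attributes to [10].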
“…Relevance-Based Language Models (RM) were conceived for expanding queries automatically [6]. However, they can be effectively applied to CF recommendation [8,2,10]. The task of recommending items to a user can be likened to the task of expanding a query with new terms.…”
Section: Introduction
confidence: 99%
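The query-expansion analogy can be sketched as an RM1-style estimator: the user's rated items play the role of the query terms, the neighbours play the role of pseudo-relevant documents, and unrated candidate items are the "expansion terms". This is a simplified illustration under assumptions of my own (uniform neighbour prior, Jelinek-Mercer-smoothed user models), not the exact formulation of the cited papers:

```python
import math

def smoothed_p(item, user_ratings, collection_p, lam=0.5):
    """Jelinek-Mercer-smoothed user language model p(i|v); any of the
    smoothing methods discussed above could be plugged in instead."""
    total = sum(user_ratings.values())
    return (1 - lam) * user_ratings.get(item, 0) / total \
        + lam * collection_p.get(item, 0.0)

def rm1_scores(user, ratings, neighbourhood, collection_p, lam=0.5):
    """RM1-style relevance model for CF:
    score(i) = sum over neighbours v of p(v) * p(i|v) * prod_{j in I_u} p(j|v)."""
    p_v = 1.0 / len(neighbourhood)  # uniform neighbour prior (assumption)
    candidates = ({i for v in neighbourhood for i in ratings[v]}
                  - set(ratings[user]))
    scores = {}
    for i in candidates:
        s = 0.0
        for v in neighbourhood:
            # likelihood of the user's "query" (rated items) under v's model
            lik = math.prod(smoothed_p(j, ratings[v], collection_p, lam)
                            for j in ratings[user])
            s += p_v * smoothed_p(i, ratings[v], collection_p, lam) * lik
        scores[i] = s
    return scores
```

Ranking `scores` in descending order yields the top-N recommendation list, just as ranking expansion terms yields an expanded query.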