2014
DOI: 10.1145/2623372

Exploration in Interactive Personalized Music Recommendation

Abstract: Current music recommender systems typically act in a greedy manner by recommending songs with the highest user ratings. Greedy recommendation, however, is suboptimal over the long term: it does not actively gather information on user preferences and fails to recommend novel songs that are potentially interesting. A successful recommender system must balance the needs to explore user preferences and to exploit this information for recommendation. This article presents a new approach to music recommendation by f…
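To make the exploration-exploitation trade-off described in the abstract concrete, here is a minimal Thompson-sampling sketch over a small song catalogue. It is illustrative only: the `SongBandit` class, the Beta posteriors over per-song "like" probabilities, and the simulated feedback loop are assumptions for this sketch, not the paper's actual Bayesian model.

```python
import random

# Minimal Thompson-sampling sketch (illustrative only, not the paper's exact model):
# each song keeps a Beta(alpha, beta) posterior over the probability the user likes it.
class SongBandit:
    def __init__(self, song_ids):
        # Start from a uniform Beta(1, 1) prior for every song.
        self.posterior = {s: [1.0, 1.0] for s in song_ids}

    def recommend(self):
        # Thompson sampling: draw one plausible "like" probability per song
        # from its posterior and recommend the song with the largest draw.
        draws = {s: random.betavariate(a, b) for s, (a, b) in self.posterior.items()}
        return max(draws, key=draws.get)

    def update(self, song_id, liked):
        # Bayesian update of the chosen song's posterior from observed feedback (0 or 1).
        a, b = self.posterior[song_id]
        self.posterior[song_id] = [a + liked, b + (1 - liked)]


if __name__ == "__main__":
    bandit = SongBandit(song_ids=["s1", "s2", "s3"])
    for _ in range(100):
        song = bandit.recommend()
        feedback = 1 if random.random() < 0.5 else 0  # placeholder user feedback
        bandit.update(song, feedback)
```

A purely greedy recommender would always pick the song with the highest posterior mean; sampling from the posterior instead lets uncertain, rarely played songs occasionally surface and gather feedback.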

Cited by 80 publications (51 citation statements)
References 41 publications (53 reference statements)
“…What's more, advanced recommendation approaches have started to apply reinforcement learning [8] to the recommendation process, treating the recommendation task as a decision problem in order to provide more accurate recommendations. Wang et al. [11] proposed a reinforcement learning framework based on a Bayesian model to balance the exploration and exploitation of users' preferences for recommendation. To learn user preferences, it uses a Bayesian model that accounts for both audio content and the novelty of recommendations.…”
Section: Related Work (mentioning)
confidence: 99%
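The "audio content plus novelty" combination mentioned above can be sketched as a content score discounted by how recently a song was heard. The function below is a hypothetical illustration: `combined_rating`, its parameters, and the exponential decay form are assumptions, not necessarily the exact model in [11].

```python
import numpy as np

def combined_rating(theta, audio_features, hours_since_last_play, decay_scale=24.0):
    """Illustrative sketch of a reward mixing audio-content preference with novelty.
    Parameter names and the exact functional form are assumptions for this sketch."""
    content_score = float(theta @ audio_features)  # linear preference over audio features
    # Novelty grows back over time: a song played very recently contributes little reward.
    novelty = 1.0 - np.exp(-hours_since_last_play / decay_scale)
    return content_score * novelty
```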
“…Exploitation-exploration of items is also a hot topic in the RS field [5,10,17,19,22,24,35,43]. Exploring more items can introduce more diversity in recommendation results.…”
Section: Related Work (mentioning)
confidence: 99%
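As a small illustration of how exploration injects diversity into the result list, here is a generic epsilon-greedy selection rule; it is not tied to any of the cited systems, and the scores are placeholder predictions.

```python
import random

def epsilon_greedy(scores, epsilon=0.1):
    """Illustrative epsilon-greedy selection: with probability epsilon recommend a
    random item (exploration, which adds diversity), otherwise exploit the top item."""
    if random.random() < epsilon:
        return random.choice(list(scores))   # explore: any item can surface
    return max(scores, key=scores.get)       # exploit: highest predicted score

# Example: item "c" is occasionally recommended even though "a" scores highest.
picks = [epsilon_greedy({"a": 0.9, "b": 0.5, "c": 0.2}) for _ in range(1000)]
```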
“…2.2.1 Contextual Multi-Armed Bandit models. A group of works [5,7,23,40,44,50] has begun to formulate the problem as a Contextual Multi-Armed Bandit (MAB) problem, where the context contains user and item features. [23] assumes the expected reward is a linear function of the context.…”
Section: Reinforcement Learning in Recommendation (mentioning)
confidence: 99%
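The linear-reward assumption described in that excerpt (expected reward modeled as a linear function of a context vector x) is the core of LinUCB-style contextual bandits. The sketch below is a minimal per-arm version: the class and function names are illustrative, and sharing one context vector across arms is a simplification of the usual per-arm feature setup.

```python
import numpy as np

class LinUCBArm:
    """Illustrative LinUCB-style arm: expected reward is modeled as theta^T x for
    context x, and an upper confidence bound encourages exploring uncertain arms."""

    def __init__(self, dim, alpha=1.0):
        self.A = np.eye(dim)     # ridge-regression design matrix
        self.b = np.zeros(dim)   # accumulated reward-weighted contexts
        self.alpha = alpha       # exploration strength

    def ucb(self, x):
        A_inv = np.linalg.inv(self.A)
        theta = A_inv @ self.b                        # current linear estimate
        mean = theta @ x                              # predicted reward
        bonus = self.alpha * np.sqrt(x @ A_inv @ x)   # uncertainty bonus
        return mean + bonus

    def update(self, x, reward):
        self.A += np.outer(x, x)
        self.b += reward * x

def choose(arms, x):
    # Recommend the arm (item) whose upper confidence bound is largest.
    return max(range(len(arms)), key=lambda i: arms[i].ucb(x))
```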