2017
DOI: 10.1007/s11042-017-5121-z
Large-scale user modeling with recurrent neural networks for music discovery on multiple time scales

Abstract: The amount of content on online music streaming platforms is immense, and most users only access a tiny fraction of this content. Recommender systems are the application of choice to open up the collection to these users. Collaborative filtering has the disadvantage that it relies on explicit ratings, which are often unavailable, and generally disregards the temporal nature of music consumption. On the other hand, item co-occurrence algorithms, such as the recently introduced word2vec-based recommenders, are t…
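To make the contrast concrete, here is a minimal sketch of the word2vec-style item co-occurrence approach the abstract refers to: listening sessions are treated as "sentences" of track IDs, and gensim learns track embeddings from their co-occurrence. The sessions and track IDs below are invented for illustration; this is not the paper's implementation.

```python
# Minimal word2vec-based track recommender sketch: listening sessions
# play the role of sentences, track IDs the role of words.
from gensim.models import Word2Vec

# Hypothetical listening sessions (invented for illustration).
sessions = [
    ["track_12", "track_907", "track_43", "track_12"],
    ["track_43", "track_88", "track_907"],
    ["track_88", "track_43", "track_12"],
]

# Skip-gram (sg=1) tends to suit sparse item co-occurrence data.
model = Word2Vec(sessions, vector_size=32, window=5, min_count=1, sg=1, epochs=50)

# Recommend tracks that appear in similar contexts to a seed track.
print(model.wv.most_similar("track_43", topn=3))
```

Note that, as the abstract points out, this style of model yields item-to-item similarity but no explicit user representation, which is the gap the paper's RNN-based user modeling addresses.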

Cited by 17 publications (17 citation statements)
References 17 publications
“…The major difference between the proposed work and the work by De Boom et al. (2017) is the assumption about the number of modes in the distribution of user behaviors. The proposed model considers the mapping from history to a future behaviour as a probability distribution with multiple modes, unlike their work, in which such a distribution is assumed to be unimodal.…”
Section: Related Work: A History-based Recommender System With A Recur…
Citation type: mentioning
confidence: 95%
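To make the unimodal-versus-multimodal distinction in this statement concrete, here is a hedged sketch of a generic mixture density head in PyTorch. It is not the citing paper's actual architecture, and all names are illustrative: with K = 1 it collapses to the unimodal assumption attributed to De Boom et al. (2017), while K > 1 lets the predicted distribution over a future-behaviour embedding have multiple modes.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MDNHead(nn.Module):
    """Illustrative mixture density head: predicts a K-mode diagonal
    Gaussian mixture over a d-dimensional future-behaviour embedding."""
    def __init__(self, hidden_dim, d, K):
        super().__init__()
        self.K, self.d = K, d
        self.pi = nn.Linear(hidden_dim, K)             # mixture weights
        self.mu = nn.Linear(hidden_dim, K * d)         # component means
        self.log_sigma = nn.Linear(hidden_dim, K * d)  # per-dim log std devs

    def forward(self, h):
        B = h.size(0)
        log_pi = F.log_softmax(self.pi(h), dim=-1)               # (B, K)
        mu = self.mu(h).view(B, self.K, self.d)                  # (B, K, d)
        sigma = self.log_sigma(h).view(B, self.K, self.d).exp()  # (B, K, d)
        return log_pi, mu, sigma

def mdn_nll(log_pi, mu, sigma, target):
    """Negative log-likelihood of target (B, d) under the mixture."""
    comp = torch.distributions.Normal(mu, sigma)
    log_prob = comp.log_prob(target.unsqueeze(1)).sum(-1)  # (B, K)
    return -torch.logsumexp(log_pi + log_prob, dim=-1).mean()
```

With K = 1 the logsumexp reduces to a single Gaussian log-likelihood, i.e. the unimodal case the statement contrasts against.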
“…Moreover, cascade models [16] or sequential attention models [22] that consider a longer user behavior history show particularly good results. Our sequential recommender draws inspiration from [2] and shows positive performance improvements in online experiments. Multi-armed bandits for prioritizing among multiple sources are applied in different use cases.…”
Section: Related Work
Citation type: mentioning
confidence: 99%
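The "multi-armed bandits for prioritizing among multiple sources" idea in this statement can be sketched with Thompson sampling over per-source Beta posteriors. The source names and click-feedback loop below are hypothetical, not taken from the cited work:

```python
import random

class SourceBandit:
    """Thompson sampling over candidate recommendation sources.
    Each source keeps a Beta(successes + 1, failures + 1) posterior
    over its click-through rate."""
    def __init__(self, sources):
        self.stats = {s: [1, 1] for s in sources}  # [alpha, beta] per source

    def pick(self):
        # Draw one plausible click-rate per source; exploit the best draw.
        draws = {s: random.betavariate(a, b) for s, (a, b) in self.stats.items()}
        return max(draws, key=draws.get)

    def update(self, source, clicked):
        a, b = self.stats[source]
        self.stats[source] = [a + clicked, b + (1 - clicked)]

# Hypothetical sources competing for one recommendation slot.
bandit = SourceBandit(["sequential_rnn", "editorial", "trending"])
chosen = bandit.pick()
bandit.update(chosen, clicked=1)  # feed back the observed outcome
```

Sampling from the posterior rather than taking its mean gives the exploration behavior that lets weaker sources occasionally prove themselves.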
“…To help the RNN model long-term dependencies and to counter the vanishing gradient problem [15], several extensions of Equation (6) have been proposed. The best-known examples are long short-term memories (LSTMs) and, more recently, gated recurrent units (GRUs), which both have comparable characteristics and similar performance on a variety of tasks [4,10,16].…”
Section: Common RNN Layers
Citation type: mentioning
confidence: 99%
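For reference, a worked version of the recurrences this statement discusses. The plain RNN update (presumably what the quote's "Equation (6)" denotes) and the standard GRU gating are written in the common textbook convention, which may differ from the paper's exact notation:

```latex
\begin{aligned}
  % Plain RNN recurrence, prone to vanishing gradients:
  h_t &= \tanh(W x_t + U h_{t-1} + b) \\
  % GRU extension: the update gate z_t blends old and new state,
  % the reset gate r_t controls how much history the candidate sees.
  z_t &= \sigma(W_z x_t + U_z h_{t-1} + b_z) \\
  r_t &= \sigma(W_r x_t + U_r h_{t-1} + b_r) \\
  \tilde{h}_t &= \tanh\big(W_h x_t + U_h (r_t \odot h_{t-1}) + b_h\big) \\
  h_t &= (1 - z_t) \odot h_{t-1} + z_t \odot \tilde{h}_t
\end{aligned}
```

Because the final update blends h_{t-1} into h_t additively, gradients have a direct path across many time steps without repeatedly passing through a squashing nonlinearity, which is what counters the vanishing-gradient problem the quote mentions.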
“…Author communication

Reference            | Input granularity | Layers | Depth verified via
[1]                  | Word-level        | 4 (?)  | Author communication
(Zoph, 2016) [43]    | Word-level        | 4      | Author communication
(Inan, 2016) [18]    | Word-level        | 4      | Source code
(Merity, 2017) [25]  | Word-level        | 4      | Source code
(Yang, 2017) [40]    | Word-level        | 3      | Author communication
(Sturm, 2016) [35]   | Music notes       | 3      | Author communication
(Saon, 2016) [32]    | Phonemes          | 3      | Author communication
(De Boom, 2017) [6]  | Playlist tracks   | 2      | Paper
…”
Section: Reference
Citation type: mentioning
confidence: 99%