Proceedings of the 1st International Workshop on Deep Learning Practice for High-Dimensional Sparse Data 2019
DOI: 10.1145/3326937.3341254

Attention-based mixture density recurrent networks for history-based recommendation

Abstract: The goal of personalized history-based recommendation is to automatically output a distribution over all the items given a sequence of previous purchases of a user. In this work, we present a novel approach that uses a recurrent network for summarizing the history of purchases, continuous vectors representing items for scalability, and a novel attention-based recurrent mixture density network, which outputs each component in a mixture sequentially, for modelling a multi-modal conditional distribution. We evalu…
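The abstract outlines the architecture at a high level: a recurrent network encodes the purchase history over continuous item vectors, and an attention-based recurrent mixture density network emits the mixture components one at a time, so the conditional distribution over the next item can be multi-modal. The sketch below is a minimal, hypothetical PyTorch rendering of that idea; the class and function names, layer sizes, number of components, and the additive attention form are illustrative assumptions, not the paper's actual implementation.

```python
# Hypothetical sketch of an attention-based recurrent mixture density network
# for history-based recommendation. Names, dimensions, and the exact attention
# mechanism are illustrative assumptions, not the paper's implementation.
import math
import torch
import torch.nn as nn
import torch.nn.functional as F


class AttentiveRecurrentMDN(nn.Module):
    def __init__(self, n_items, emb_dim=64, hidden_dim=128, n_components=5):
        super().__init__()
        self.item_emb = nn.Embedding(n_items, emb_dim)        # continuous item vectors
        self.encoder = nn.GRU(emb_dim, hidden_dim, batch_first=True)
        self.decoder_cell = nn.GRUCell(hidden_dim, hidden_dim)
        self.attn = nn.Linear(hidden_dim * 2, 1)              # additive attention score
        self.n_components = n_components
        # each decoder step emits one component: mixing logit, mean, log-variance
        self.to_pi = nn.Linear(hidden_dim, 1)
        self.to_mu = nn.Linear(hidden_dim, emb_dim)
        self.to_logvar = nn.Linear(hidden_dim, emb_dim)

    def forward(self, history):
        # history: (batch, seq_len) item ids of previous purchases
        enc, h = self.encoder(self.item_emb(history))          # enc: (B, T, H)
        state = h.squeeze(0)                                   # (B, H)
        pis, mus, logvars = [], [], []
        for _ in range(self.n_components):
            # attend over the encoded history with the current decoder state
            query = state.unsqueeze(1).expand_as(enc)          # (B, T, H)
            scores = self.attn(torch.cat([enc, query], dim=-1)).squeeze(-1)
            context = (F.softmax(scores, dim=-1).unsqueeze(-1) * enc).sum(dim=1)
            state = self.decoder_cell(context, state)
            pis.append(self.to_pi(state))
            mus.append(self.to_mu(state))
            logvars.append(self.to_logvar(state))
        pi = F.softmax(torch.cat(pis, dim=-1), dim=-1)         # (B, K) mixture weights
        mu = torch.stack(mus, dim=1)                           # (B, K, D) component means
        logvar = torch.stack(logvars, dim=1)                   # (B, K, D) diagonal log-variances
        return pi, mu, logvar


def mdn_nll(pi, mu, logvar, target):
    # negative log-likelihood of the target item embedding under the mixture
    target = target.unsqueeze(1)                               # (B, 1, D)
    log_comp = -0.5 * (logvar + (target - mu) ** 2 / logvar.exp()
                       + math.log(2 * math.pi)).sum(dim=-1)
    return -torch.logsumexp(torch.log(pi + 1e-8) + log_comp, dim=-1).mean()


# toy usage
model = AttentiveRecurrentMDN(n_items=1000)
history = torch.randint(0, 1000, (8, 20))                      # 8 users, 20 past purchases
pi, mu, logvar = model(history)
target = model.item_emb(torch.randint(0, 1000, (8,)))          # embedding of the next item
loss = mdn_nll(pi, mu, logvar, target.detach())
loss.backward()
```

Emitting one (weight, mean, variance) triple per attended decoder step is what lets the number and placement of modes adapt to the purchase history, in contrast to a single-Gaussian output head.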

Cited by 7 publications (4 citation statements)
References 27 publications
“…It is also a difficult baseline to beat in terms of generating user engagement, given that these are items the user has engaged with recently. The works of Song et al. [28] and Wang et al. [29] show approaches similar to RVI to be strong baseline methods, outperforming CF-based methods.…”
Section: Offline Evaluation
confidence: 99%
“…Other ways to achieve the permutation-invariant loss for neural networks include sequential decision making (Welleck et al. 2018), mixture of experts (Yang et al. 2018b; Wang, Cho, and Wen 2019), beam search (Qin et al. 2019), predicting the permutation using a CNN (Rezatofighi et al. 2018), Transformers (Stern et al. 2019; Gu, Liu, and Cho 2019; Carion et al. 2020), or reinforcement learning (Welleck et al. 2019). In contrast, our goal is to efficiently predict a set of cluster centers that can well reconstruct the set of observed instances rather than directly predicting the observed instances.…”
Section: Related Work
confidence: 99%