MARS: Memory Attention-Aware Recommender System
Preprint, 2018
DOI: 10.48550/arxiv.1805.07037

Cited by 5 publications (4 citation statements). References 29 publications.
“…Occasionally, we find that results can be conflicting and relative positions change very frequently. For example, the scores of NCF in [201] are ranked relatively low compared to the original paper that proposed the model [53]. This makes the relative benchmarking of new neural models extremely challenging.…”
Section: The Field Needs Better, More Unified, and Harder Evaluation (mentioning)
confidence: 95%
“…A similar architecture is proposed to handle both textual and visual question answering tasks (Xiong, Merity, and Socher 2016), where visual inputs are fed into a deep convolutional network and high-level features are extracted and processed into an input sequence for the attention network. If we further extend memory and query representation to fields beyond question answering, memory-based attention techniques are also used in aspect and opinion term mining (Wang et al 2017), where the query is represented as aspect prototypes; in recommender systems (Zheng et al 2018), where users become the memory component and items become queries; in topic modeling (Zeng et al 2018), where the latent topic representation extracted from a deep network constitutes the memory; etc.…”
Section: Memory-based Attention (mentioning)
confidence: 99%
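
The pattern this statement attributes to MARS, where a candidate item acts as a query attending over a memory built from the user's interaction history, can be illustrated with a minimal sketch. This is not the authors' implementation; the function names, the plain dot-product scoring, and the dimensions are assumptions made only for the example.

# Illustrative sketch (not the MARS code): memory-based attention that scores
# one candidate item against a user's interaction history.
import numpy as np

def softmax(x):
    z = np.exp(x - x.max())
    return z / z.sum()

def attentive_user_representation(history_emb, item_emb):
    """history_emb: (n_history, d) embeddings of items the user interacted with
    (the "memory"); item_emb: (d,) embedding of the candidate item (the "query").
    Returns an item-adaptive user vector: a weighted sum of the memory slots."""
    scores = history_emb @ item_emb      # relevance of each memory slot to the query
    weights = softmax(scores)            # attention distribution over the history
    return weights @ history_emb         # (d,) adaptive user representation

def score(history_emb, item_emb):
    """Recommendation score: dot product of the attention-derived user vector
    with the candidate item embedding."""
    user_vec = attentive_user_representation(history_emb, item_emb)
    return float(user_vec @ item_emb)

# Toy usage: 5 history items, embedding size 8 (arbitrary choices).
rng = np.random.default_rng(0)
history = rng.normal(size=(5, 8))
candidate = rng.normal(size=8)
print(score(history, candidate))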
“…The Facebook DLRM network (Naumov et al, 2019) mimics factorization machines more directly by passing the pairwise dot product between different embeddings into a multilayer perceptron (MLP). More sophisticated techniques that incorporate trees, memory, and (self-)attention mechanisms (to capture sequential user behavior) have also been proposed (Zheng et al, 2018; Zhou et al, 2018a;b; Zhu et al, 2018).…”
Section: Related Work (mentioning)
confidence: 99%
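
The DLRM-style interaction described in this statement, pairwise dot products between feature embeddings concatenated with the dense features and passed to an MLP, can also be shown as a rough sketch. The layer sizes, helper names, and the plain NumPy MLP are assumptions for illustration, not the DLRM code.

# Illustrative sketch of a factorization-machine-like interaction layer:
# pairwise dot products between embeddings feed a small MLP.
import numpy as np

def pairwise_dot_interactions(embeddings):
    """embeddings: (num_features, d). Returns the strictly upper-triangular
    pairwise dot products, flattened."""
    gram = embeddings @ embeddings.T                  # (F, F) all pairwise dots
    iu = np.triu_indices(gram.shape[0], k=1)          # keep each pair once
    return gram[iu]

def mlp(x, weights, biases):
    """A plain ReLU MLP; the final layer is left linear (a logit)."""
    for i, (W, b) in enumerate(zip(weights, biases)):
        x = x @ W + b
        if i < len(weights) - 1:
            x = np.maximum(x, 0.0)
    return x

# Toy usage: 4 sparse-feature embeddings of size 8 plus a dense vector of size 8.
rng = np.random.default_rng(0)
emb = rng.normal(size=(4, 8))
dense = rng.normal(size=8)
features = np.concatenate([dense, pairwise_dot_interactions(emb)])  # 8 + C(4,2) = 14 dims
W1, b1 = rng.normal(size=(14, 16)), np.zeros(16)
W2, b2 = rng.normal(size=(16, 1)), np.zeros(1)
print(mlp(features, [W1, W2], [b1, b2]))  # unnormalized recommendation score (logit)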