ICASSP 2020 - 2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)
DOI: 10.1109/icassp40776.2020.9053071

Attentive Item2vec: Neural Attentive User Representations

Abstract: Factorization methods for recommender systems tend to represent users as a single latent vector. However, user behavior and interests may change in the context of the recommendations that are presented to the user. For example, in the case of movie recommendations, it is usually true that earlier user data is less informative than more recent data. However, it is possible that a certain early movie may become suddenly more relevant in the presence of a popular sequel movie. This is just a single example of a v…

Cited by 19 publications (16 citation statements)
References 12 publications

“…Attentive recommendation models have been described in several earlier works such as Attentive CF [13], Neural Attentive Multiview Machines [4], DeepICF [40], and Attentive Item2vec [3]. These works model users as a set of item vectors.…”
Section: Related Work
confidence: 99%
“…Items are clustered together into priority-medoids, and cover-trees are used to declaratively capture a desired balance between predicted rating and diversity. In [3], DiRec was compared to other algorithms such as Greedy and Swap [41] and was shown to significantly outperform them on several evaluation metrics.…”
Section: Evaluated Models
confidence: 99%
“…Transformer-based models have revolutionized the fields of natural language processing [22,33,46] and recommender systems [34,44], improving upon other neural embedding models [3, 5, 7, 8, 13-15, 17, 36, 37, 40]. Significant strides were made in tasks such as machine translation [46], sentiment analysis [49], semantic textual similarity [16,23,35], and item similarity [6,11,12,28,34]. However, transformers employ a complex attention-based architecture comprising hundreds of millions of parameters that cannot be decomposed into smaller, more interpretable components.…”
Section: Introduction
confidence: 99%
“…Neural attention mechanisms are emerging techniques that affect a variety of different fields [2], [11], [19], [21], [31], [32], [33]. For instance, in [24] the authors suggest using an attention mechanism to improve the nearest neighbors in each view; however, the integration between the different views is set by hyperparameters.…”
Section: Introduction and Related Work
confidence: 99%