Proceedings of the 2017 ACM on Conference on Information and Knowledge Management
DOI: 10.1145/3132847.3132892

Joint Representation Learning for Top-N Recommendation with Heterogeneous Information Sources

Cited by 275 publications
(215 citation statements)
References 36 publications
“…Finding an explanation path, however, is difficult for an arbitrary (u, i) pair. Because we only observe a limited number of relationship triples in the training data, the knowledge graph built on product data usually is sparse [6,69,71]. In most cases, it is impossible to find two sets of relationships {r_u^k} and {r_i^j} that directly link the user u to the item i.…”
Section: Extraction Algorithm (mentioning)
confidence: 99%
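To make the sparsity issue in this excerpt concrete, the sketch below checks, for a (user, item) pair, whether any one-hop intermediate entity links the user's and the item's observed relation triples. It is a minimal illustration over hypothetical triples, not the extraction algorithm of the cited work.

```python
# Minimal sketch: on a sparse product knowledge graph, many (user, item) pairs
# have no connecting explanation path at all. Triples below are hypothetical.
from collections import defaultdict

def build_index(triples):
    """Map each head entity to the set of (relation, tail) pairs observed for it."""
    index = defaultdict(set)
    for head, relation, tail in triples:
        index[head].add((relation, tail))
    return index

def explanation_paths(user, item, index):
    """Return (r_u, e, r_i) paths where the user and the item reach a shared entity e."""
    item_by_tail = defaultdict(set)
    for r_i, e in index.get(item, set()):
        item_by_tail[e].add(r_i)
    paths = []
    for r_u, e in index.get(user, set()):
        for r_i in item_by_tail.get(e, ()):
            paths.append((r_u, e, r_i))
    return paths

# Hypothetical sparse training triples.
triples = [
    ("user_1", "mention", "battery"),
    ("item_9", "described_by", "battery"),
    ("user_2", "mention", "screen"),
]
index = build_index(triples)
print(explanation_paths("user_1", "item_9", index))  # [('mention', 'battery', 'described_by')]
print(explanation_paths("user_2", "item_9", index))  # [] -> no explanation path exists
```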
“…As the goal of a recommender system is to provide the target user with a ranked list of the top few items most likely to be preferred by that user, recommendation is a ranking problem rather than a rating problem. In light of this, MF moves to model the relative preferences between different items, and the pairwise learning approach has been widely adopted to achieve this goal [35,54]. In pairwise learning, the user and item vectors are learnt by setting r̂_{u,i} > r̂_{u,k} for any two pairs that satisfy (u, i) ∈ R and (u, k) ∉ R. A typical example is Bayesian personalized ranking (BPR) [35].…”
Section: Our Proposed Model, 3.1 Notations and Background (mentioning)
confidence: 99%
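The excerpt above describes pairwise learning for ranking. The sketch below illustrates the idea with a plain matrix-factorization scorer and a BPR-style stochastic gradient update; the dimensions, data, and hyperparameters are illustrative assumptions, not values taken from the cited papers.

```python
# Minimal BPR-style sketch (assumed matrix-factorization setup, hypothetical data):
# push the predicted score r_hat(u, i) of an observed item i above r_hat(u, k)
# of an unobserved item k.
import numpy as np

rng = np.random.default_rng(0)
n_users, n_items, dim = 4, 6, 8
U = rng.normal(scale=0.1, size=(n_users, dim))   # user latent vectors
V = rng.normal(scale=0.1, size=(n_items, dim))   # item latent vectors

def bpr_step(u, i, k, lr=0.05, reg=0.01):
    """One SGD ascent step on ln sigma(r_hat_ui - r_hat_uk) with L2 regularization."""
    u_vec, i_vec, k_vec = U[u].copy(), V[i].copy(), V[k].copy()
    x_uik = u_vec @ i_vec - u_vec @ k_vec        # score margin r_hat_ui - r_hat_uk
    g = 1.0 / (1.0 + np.exp(x_uik))              # sigma(-x): gradient scale of ln sigma(x)
    U[u] += lr * (g * (i_vec - k_vec) - reg * u_vec)
    V[i] += lr * (g * u_vec - reg * i_vec)
    V[k] += lr * (-g * u_vec - reg * k_vec)

# Hypothetical interaction: user 0 bought item 2; item 5 serves as a sampled negative.
for _ in range(200):
    bpr_step(u=0, i=2, k=5)
print(U[0] @ V[2] > U[0] @ V[5])  # expected: True, the observed item ranks higher
```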
“…For each user-item (u, i) pair, our model computes a weight vector a_{u,i} ∈ ℝ^f to indicate the importance of i's aspects for u. In addition, the side information of items is exploited to estimate the weight vector, as side information conveys rich features of items, especially text reviews and item images, which are well-recognized to provide notable and complementary features of items in different aspects [9,54]. We adopt the recent advancement of attention mechanism [6,10] to estimate the attention vector.…”
Section: Multimodal Attentive Metric Learning (mentioning)
confidence: 99%
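As a rough illustration of the attention idea in the excerpt, the sketch below derives an aspect-importance vector a_{u,i} ∈ ℝ^f from concatenated user and item side-information embeddings and normalizes it with a softmax. The scoring form and all shapes are assumptions for illustration, not the cited model's exact architecture.

```python
# Minimal attention sketch: a softmax over f aspect scores computed from
# (hypothetical) user/item side-information embeddings.
import numpy as np

rng = np.random.default_rng(1)
f, d = 5, 16                                  # f aspects, d-dim side-information embeddings
W = rng.normal(scale=0.1, size=(f, 2 * d))    # attention projection (assumed form)
b = np.zeros(f)

def aspect_attention(user_feat, item_feat):
    """Return a_{u,i}: softmax-normalized importance of the f aspects for this pair."""
    joint = np.concatenate([user_feat, item_feat])   # fuse user and item side info
    scores = W @ joint + b                           # unnormalized aspect scores
    scores -= scores.max()                           # numerical stability
    weights = np.exp(scores)
    return weights / weights.sum()

user_feat = rng.normal(size=d)   # e.g. embedding of the user's review text (hypothetical)
item_feat = rng.normal(size=d)   # e.g. embedding of the item's image (hypothetical)
a_ui = aspect_attention(user_feat, item_feat)
print(a_ui, a_ui.sum())          # f non-negative weights summing to 1
```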
“…The focused learning technique [32] improves prediction accuracy for cold-start items based on a customized matrix-factorization objective and hyperparameter optimization. A Joint Representation Learning (JRL) framework [33] enhances top-N recommendation with several heterogeneous information sources, such as review text, numerical ratings, and product images. It can learn a complex prediction network while allowing fast online calculation in the recommendation model.…”
Section: Ranking-Based Recommender Systems (mentioning)
confidence: 99%
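To illustrate the joint-representation idea referenced above, the sketch below projects per-source item features (review text, rating, image) into a shared space, averages them, and scores a user-item pair by an inner product. The averaging fusion, the per-source projections, and all shapes are illustrative assumptions rather than JRL's actual network.

```python
# Minimal sketch of combining heterogeneous information sources into one item
# representation for top-N scoring. All features and shapes are hypothetical.
import numpy as np

rng = np.random.default_rng(2)
d_src = {"review_text": 32, "rating": 8, "image": 64}   # per-source feature sizes
d_joint = 16

# One linear projection per information source into the shared space (assumed form).
proj = {name: rng.normal(scale=0.1, size=(d_joint, d)) for name, d in d_src.items()}

def joint_item_representation(features):
    """Project each source's feature vector and average them into one joint vector."""
    parts = [proj[name] @ vec for name, vec in features.items()]
    return np.mean(parts, axis=0)

# Hypothetical pre-extracted features for one item.
item_features = {name: rng.normal(size=d) for name, d in d_src.items()}
item_vec = joint_item_representation(item_features)
user_vec = rng.normal(scale=0.1, size=d_joint)   # learned user vector (assumed)
print(float(user_vec @ item_vec))                # ranking score used for top-N retrieval
```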