Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, 2018
DOI: 10.1145/3219819.3220086
Multi-Pointer Co-Attention Networks for Recommendation

Abstract: Many recent state-of-the-art recommender systems such as D-ATT, TransNet and DeepCoNN exploit reviews for representation learning. This paper proposes a new neural architecture for recommendation with reviews. Our model operates on a multi-hierarchical paradigm and is based on the intuition that not all reviews are created equal, i.e., only a selected few are important. The importance, however, should be dynamically inferred depending on the current target. To this end, we propose a review-by-review pointerbas…
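The abstract describes a review-by-review pointer mechanism: user and item reviews are compared pairwise, and only the most informative review on each side is selected ("pointed at") for the current user–item pair. A minimal numpy sketch of that idea, not the authors' implementation: the bilinear affinity form, the dimensions, and the max-pooling pointer rule are all assumptions for illustration.

```python
import numpy as np

# Hypothetical sketch of review-level co-attention with a hard pointer.
# The affinity s_ij = u_i^T M v_j and the pooling rule are assumptions,
# not the paper's exact formulation.
rng = np.random.default_rng(0)
n_user_reviews, n_item_reviews, d = 4, 5, 8

U = rng.standard_normal((n_user_reviews, d))   # user review embeddings
V = rng.standard_normal((n_item_reviews, d))   # item review embeddings
M = rng.standard_normal((d, d))                # learned affinity matrix

S = U @ M @ V.T                                # review-by-review affinity matrix
# Pool the affinity matrix to score each review, then "point" at the
# single most informative review on each side (hard selection).
user_ptr = int(np.argmax(S.max(axis=1)))       # index of selected user review
item_ptr = int(np.argmax(S.max(axis=0)))       # index of selected item review

selected_user_review = U[user_ptr]
selected_item_review = V[item_ptr]
```

In the paper the hard selection is made differentiable (and the "multi-pointer" in the title repeats this selection several times); the argmax above only illustrates the inference-time behavior of a single pointer.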

Cited by 246 publications (178 citation statements)
References 36 publications (70 reference statements)
“…The fraction of papers that were reproducible according to our relatively strict criteria per conference series are shown in Table 1. Non-reproducible: KDD: [43], RecSys: [41], [6], [38], [44], [21], [45], SIGIR: [32], [7], WWW: [42], [11] Overall, we could reproduce only about one third of the works, which confirms previous discussions about limited reproducibility, see, e.g., [3]. The sample size is too small to make reliable conclusions regarding the difference between conference series.…”
Section: Research Methods, 2.1 Collecting Reproducible Papers
Mentioning confidence: 99%
“…Baselines. Here, we compare the proposed CARP against the conventional baseline and recently proposed state-of-the-art rating prediction methods: (a) probabilistic matrix factorization that leverages only rating scores, PMF [23]; (b) latent topic and shallow embedding learning models with reviews, RBLT [26] and CMLE [37]; (c) deep learning based solutions with reviews, DeepCoNN [39], D-Attn [24], TransNet [3], TARMF [19], MPCN [27] and ANR [7]. Among these methods, D-Attn, TARMF, MPCN and ANR all identify important words for rating prediction.…”
Section: Methods
Mentioning confidence: 99%
“…Recent years, capitalizing user reviews to enhance the precision and the interpretability of recommendation have been investigated and verified by many works [1,3,4,6,8,15,18,20,24,26,27,29,37,39]. In earlier days, many efforts are made to extract semantic features from reviews with the topic modeling techniques [2,14].…”
Section: Review-based Recommender Systems
Mentioning confidence: 99%