Proceedings of the Twelfth ACM International Conference on Web Search and Data Mining 2019
DOI: 10.1145/3289600.3290972
Taxonomy-Aware Multi-Hop Reasoning Networks for Sequential Recommendation

Cited by 62 publications (35 citation statements) · References 43 publications
“…However, user-item interaction data is usually sparse. To tackle the data sparsity problem, many techniques have been developed by utilizing the side information of items, such as review and taxonomy data [1,9,30]. As a comparison, CRS mainly focuses on the recommendation setting through conversation instead of historical interaction data.…”
Section: Related Work
confidence: 99%
“…Chen et al [3] introduce the memory mechanism to sequential recommender systems, designing a user memory-augmented neural network (MANN) to express feature-level interests. For more fine-grained user preferences, Huang et al [8,9] use knowledge-base information to enhance the semantic representation of a key-value memory network, called the knowledge-enhanced sequential recommender. However, the extra storage, manual feature design, and memory-network computation of these methods are not acceptable in industry, given the large scale of users and items.…”
Section: Sequence-aware Recommendation
confidence: 99%
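The key-value memory read that the excerpt attributes to the knowledge-enhanced sequential recommender can be sketched as follows. This is a minimal NumPy illustration of the general key-value addressing scheme, not the cited papers' implementation; the dot-product addressing and all shapes are assumptions.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def kv_memory_read(query, keys, values):
    """Key-value memory read: address slots by key similarity, read from values.

    query:  (d,)   user or item query embedding
    keys:   (m, d) memory key slots (e.g., knowledge-base attributes)
    values: (m, d) memory value slots
    """
    weights = softmax(keys @ query)  # addressing: similarity over key slots
    return weights @ values          # read: attention-weighted sum of values

rng = np.random.default_rng(0)
q = rng.normal(size=8)
K = rng.normal(size=(5, 8))
V = rng.normal(size=(5, 8))
out = kv_memory_read(q, K, V)
print(out.shape)  # (8,)
```

The read vector can then be combined with the sequential user representation; the "extra storage" criticism in the excerpt refers to keeping the `(m, d)` key and value matrices per memory.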
“…We choose the last hidden output h^u_t of the LSTM as the query vector in multi-head attention to get the weights attending to [h^u_1, ..., h^u_t]. The weight vector is also the t-th row vector of A^u_i in Equation 8. head_1 and head_2 mainly concentrate on the first several items in the session, which are white down jackets.…”
Section: The Effect of Multi-head Attention
confidence: 99%
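The attention step quoted above, using the last LSTM hidden output h^u_t as the query over all hidden outputs [h^u_1, ..., h^u_t], can be sketched as below. This is a minimal NumPy illustration under stated assumptions: the random projection matrices stand in for the learned per-head parameters, which are not given in the excerpt.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def multihead_attend(H, num_heads=2, seed=0):
    """Attend over LSTM outputs H = [h_1, ..., h_t] with h_t as the query.

    H: (t, d) hidden states; returns per-head attention weights (num_heads, t),
    analogous to the row vectors of A^u_i in the excerpt's Equation 8.
    """
    t, d = H.shape
    dk = d // num_heads
    rng = np.random.default_rng(seed)
    weights = []
    for _ in range(num_heads):
        Wq = rng.normal(size=(d, dk))      # placeholder for learned query proj.
        Wk = rng.normal(size=(d, dk))      # placeholder for learned key proj.
        q = H[-1] @ Wq                     # query: last hidden output h_t
        Ks = H @ Wk                        # keys: all hidden outputs h_1..h_t
        weights.append(softmax(Ks @ q / np.sqrt(dk)))
    return np.stack(weights)

H = np.random.default_rng(1).normal(size=(6, 8))
A = multihead_attend(H)
print(A.shape)  # (2, 6)
```

Each row of `A` sums to 1, so the heads can specialize: in the excerpt's example, head_1 and head_2 place most of their weight on the first few session items.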
“…Taxonomies, the tree-structured hierarchies that represent the hypernymy (Is-A) relations, have been widely used in different domains, such as information extraction [5], question answering [35], and recommender systems [9], for the organization of concepts and instances as well as the injection of structured knowledge in downstream tasks. In particular, online catalog taxonomies serve…” [Figure 1: The most relevant taxonomy nodes are shown on the left when a user searches "k cups" on Amazon.com.]
Section: Introduction
confidence: 99%
“…In particular, online catalog taxonomies serve as a building block of e-commerce websites (e.g., Amazon.com) and business directory services (e.g., Yelp.com) for both customer-facing and internal applications, such as query understanding, item categorization [18], browsing, recommendation [9], and search [33].…”
Section: Introduction
confidence: 99%