Proceedings of the 13th International Conference on Web Search and Data Mining 2020
DOI: 10.1145/3336191.3371840
Sequential Modeling of Hierarchical User Intention and Preference for Next-item Recommendation

Cited by 35 publications (4 citation statements) · References 28 publications
“…Besides, researchers exploited hierarchical attention networks to learn better short-term user preference with feature-level attention and item-level attention [223]. For long-term user interest modeling, researchers proposed to leverage nearby sessions [227], and designed attention-modeling or memory-addressing techniques to find related sessions [229], [236], [237], [249].…”
Section: Temporal and Session-based Recommendation
confidence: 99%
“…The model needs extra information about the user's behaviors and thus does not fit our problem. The most related work to ours is [35]. In that work, the authors model session recommendation as a hierarchical decision-making process: the user first chooses an intention, and then clicks items conditioned on the previously clicked item.…”
Section: Related Work
confidence: 99%
“…Specifically, in the convolutional layer, we use one-dimensional convolution kernels of sizes 2, 3, 4, and 5. Each size uses 32 filters, so after pooling, 128-dimensional features are obtained [15]. Since the link sequence is relatively long, we set up a two-layer convolution structure with strides of 1 and 2; each layer finally produces 128-dimensional features.…”
Section: Modeling Scheme
confidence: 99%
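The multi-size convolution bank quoted above can be sketched as follows. This is a minimal NumPy illustration, not the cited authors' implementation: the function name `conv1d_bank`, the random (untrained) weights, and the ReLU plus global max pooling are assumptions; the kernel sizes (2, 3, 4, 5) and the 32 filters per size, yielding a 4 × 32 = 128-dimensional feature vector, come from the quoted text.

```python
import numpy as np

def conv1d_bank(seq_emb, kernel_sizes=(2, 3, 4, 5), n_filters=32, seed=0):
    """One convolutional layer with several kernel sizes over an
    embedded sequence, followed by global max pooling per filter.

    seq_emb: (seq_len, emb_dim) array, one embedding per sequence step.
    Returns a vector of len(kernel_sizes) * n_filters features.
    """
    rng = np.random.default_rng(seed)
    seq_len, emb_dim = seq_emb.shape
    feats = []
    for k in kernel_sizes:
        # Random weights stand in for learned parameters in this sketch.
        W = rng.standard_normal((n_filters, k, emb_dim))
        # "Valid" 1-D convolution: slide a window of width k over the sequence.
        windows = np.stack([seq_emb[i:i + k] for i in range(seq_len - k + 1)])
        conv = np.einsum('twd,fwd->tf', windows, W)    # (positions, n_filters)
        # ReLU, then global max pooling over positions -> one value per filter.
        feats.append(np.maximum(conv, 0).max(axis=0))
    return np.concatenate(feats)                       # 4 sizes * 32 = 128 dims

vec = conv1d_bank(np.random.default_rng(1).standard_normal((20, 16)))
print(vec.shape)  # (128,)
```

A second such layer with stride 2 (as the quoted text describes) would simply consume this layer's output sequence before pooling; the sketch keeps only the first layer for brevity.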