2022
DOI: 10.48550/arxiv.2201.05333
Preprint

Attention over Self-attention: Intention-aware Re-ranking with Dynamic Transformer Encoders for Recommendation

Abstract: Re-ranking models refine the item recommendation list generated by the prior global ranking model with intra-item relationships. However, most existing re-ranking solutions refine the recommendation list based on implicit feedback with a shared re-ranking model, which regrettably ignores the intra-item relationships under diverse user intentions. In this paper, we propose a novel Intention-aware Re-ranking Model with Dynamic Transformer Encoder (RAISE), aiming to perform user-specific prediction for each target…
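
The abstract describes the mechanism only at a high level. The toy PyTorch snippet below illustrates one way a "dynamic" transformer encoder could condition cross-item self-attention on a user embedding, so the same candidate list is attended to differently under different user intentions. This is a minimal sketch under assumed shapes and names (UserConditionedSelfAttention, user_gate, ReRanker are all illustrative), not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class UserConditionedSelfAttention(nn.Module):
    """Single-head self-attention whose logits depend on a user embedding.

    A stand-in for the "dynamic" encoder idea: the same candidate list can
    receive different cross-item attention weights for different users.
    """
    def __init__(self, d_model: int, d_user: int):
        super().__init__()
        self.q = nn.Linear(d_model, d_model)
        self.k = nn.Linear(d_model, d_model)
        self.v = nn.Linear(d_model, d_model)
        # Maps the user embedding to a per-dimension gate on the queries.
        self.user_gate = nn.Linear(d_user, d_model)

    def forward(self, items: torch.Tensor, user: torch.Tensor) -> torch.Tensor:
        # items: (batch, list_len, d_model); user: (batch, d_user)
        gate = torch.sigmoid(self.user_gate(user)).unsqueeze(1)  # (batch, 1, d_model)
        q = self.q(items) * gate                      # user-specific queries
        k, v = self.k(items), self.v(items)
        logits = q @ k.transpose(-2, -1) / (items.size(-1) ** 0.5)
        attn = F.softmax(logits, dim=-1)              # user-dependent cross-item weights
        return attn @ v

class ReRanker(nn.Module):
    """Scores every candidate in the list for one user in one pass."""
    def __init__(self, d_model: int = 64, d_user: int = 32):
        super().__init__()
        self.encoder = UserConditionedSelfAttention(d_model, d_user)
        self.score = nn.Linear(d_model, 1)

    def forward(self, items: torch.Tensor, user: torch.Tensor) -> torch.Tensor:
        return self.score(self.encoder(items, user)).squeeze(-1)  # (batch, list_len)

# Usage: re-rank 10 candidates for a batch of 2 users.
model = ReRanker()
scores = model(torch.randn(2, 10, 64), torch.randn(2, 32))
reordered = scores.argsort(dim=-1, descending=True)
```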

Cited by 2 publications (5 citation statements)
References 26 publications

Citation statements:
“…The final prediction is generated by capturing the interactions between candidate items and users' multiple behaviors. A more recent work, RAISE [Lin et al., 2022], attempts to improve personalization in re-ranking by maintaining individual attention weights in modeling cross-item interactions for each user. Other network structures.…”
Section: Learning By Observed Signals
confidence: 99%
“…The network parameters are shared across users, while the latter maintains an individual set of parameters for each user, as in RAISE [Lin et al., 2022] or IRGPR [Liu et al., 2020b]. Complexity.…”
Section: Qualitative Model Comparison
confidence: 99%
“…The final prediction is generated by capturing the interactions between candidate items and users' multiple behaviors. RAISE [Lin et al., 2022] attempts to improve personalization in re-ranking by maintaining individual attention weights in modeling cross-item interactions for each user. A more recent work, PEAR, proposes to model cross-item interactions between both the initial list and the user's historical clicked items via a designed cross-attention structure.…”
Section: Learning By Observed Signals
confidence: 99%
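
The cross-attention structure this statement attributes to PEAR can be illustrated with a short sketch: candidate items issue the queries, and keys/values come from the user's historical clicks. All dimensions, names, and the scoring head below are illustrative assumptions, not PEAR's actual architecture.

```python
import torch
import torch.nn as nn

d_model = 64
# Candidate items attend over the user's click history: queries come from the
# initial ranking list, keys/values from the historical clicked items.
cross_attn = nn.MultiheadAttention(embed_dim=d_model, num_heads=4, batch_first=True)
score_head = nn.Linear(d_model, 1)

candidates = torch.randn(2, 10, d_model)  # initial list: (batch, list_len, d_model)
history = torch.randn(2, 30, d_model)     # clicked items: (batch, hist_len, d_model)

refined, attn_weights = cross_attn(query=candidates, key=history, value=history)
scores = score_head(refined).squeeze(-1)  # (batch, list_len) re-ranking scores
```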