2020
DOI: 10.1016/j.knosys.2020.106170
State representation modeling for deep reinforcement learning based recommendation

Cited by 44 publications (58 citation statements) · References 20 publications
“…Existing studies usually use the embedding directly as the state representation. Liu et al. [58,61] propose a supervised learning method that generates a better state representation by utilizing an attention mechanism and a pooling operation, as shown in Figure 7. Such a representation method requires training a representation network alongside the main policy network, which increases the model complexity.…”
Section: State Representation (confidence: 99%)
“…7. State representation used in works [58,61]. h_t is the output of an attention layer that takes the representation of the user's history at time t as input, and g(·) is the pooling operation.…”
Section: Attention Layer (confidence: 99%)
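The attention-plus-pooling state representation described in the quoted passage can be sketched as follows. This is a minimal illustration under stated assumptions, not the cited authors' actual architecture: the scoring vector `w`, the softmax attention, and the choice of average pooling as g(·) are all assumptions made here for concreteness.

```python
import numpy as np

def softmax(x):
    # Numerically stable softmax over a 1-D score vector.
    e = np.exp(x - x.max())
    return e / e.sum()

def state_representation(history, w):
    """Build a state vector from a user's interaction history.

    history: (T, d) array of item embeddings for the last T interactions.
    w:       (d,) attention scoring vector (a hypothetical learned parameter).
    Returns a d-dimensional state vector.
    """
    scores = history @ w             # (T,) unnormalized attention scores
    alpha = softmax(scores)          # attention weights over the history
    h_t = alpha[:, None] * history   # (T, d) attention-weighted history (h_t)
    return h_t.mean(axis=0)          # g(.): average pooling to a (d,) state

rng = np.random.default_rng(0)
history = rng.normal(size=(5, 8))    # T = 5 past items, embedding dim d = 8
w = rng.normal(size=8)
state = state_representation(history, w)
print(state.shape)  # (8,)
```

In a full system the attention parameters and the downstream policy network would be trained jointly, which is exactly the extra model complexity the quoted passage points out.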
“…An interesting approach is to define states as the clusters resulting from the co-clustering or biclustering of users and items [6], or to extend the state to include user demographics [5]. Effort has also been invested in how best to represent the state in an RL RecSys [61]. To the author's knowledge, there is currently no approach in the literature whose state definition jointly takes into account the recommendation context, user demographics, behavioral patterns, and recent browsing/interaction history.…”
Section: F. Recommender Systems Using Reinforcement Learning (confidence: 99%)
“…Actions are mostly defined as selecting an item to recommend from the whole discrete action space of candidate items [12,14,15], or even as first deciding whether to give a recommendation at all and, if so, which item to recommend [13]. Some authors consider recommending a list of items [5,11,61]. One of the most distinctive approaches is to recommend items from the clusters neighboring the user's own user-item cluster [6].…”
Section: F. Recommender Systems Using Reinforcement Learning (confidence: 99%)
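The discrete action formulation described above (choosing one item, or a slate of k items, from the candidate set) can be sketched as follows. The function `select_action` and the dot-product scoring are hypothetical illustrations, not a method taken from any cited paper.

```python
import numpy as np

def select_action(state, item_embs, k=1):
    """Score every candidate item against the current state and return
    the indices of the top-k items (k=1: a single recommendation;
    k>1: a slate/list recommendation).

    state:     (d,) state vector.
    item_embs: (N, d) embeddings of the N candidate items.
    """
    q_values = item_embs @ state            # (N,) one score per candidate
    return np.argsort(q_values)[::-1][:k]   # indices of the k best items

rng = np.random.default_rng(1)
state = rng.normal(size=8)
items = rng.normal(size=(100, 8))           # 100 candidate items
slate = select_action(state, items, k=3)
print(slate)  # indices of the 3 highest-scoring candidates
```

With large item catalogues, scoring every candidate becomes expensive, which is the large-scale recommendation problem raised in the quoted passages.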
“…There are several difficulties in building RLRS, such as state representation (Lei et al. 2020; Liu et al. 2020; Wang et al. 2020b), the slate-based recommendation setting (Sunehag et al. 2015; Swaminathan et al. 2017; Chen et al. 2019; Gong et al. 2019; Ie et al. 2019), and the large-scale recommendation problem (Dulac-Arnold et al. 2015; Zhao et al. 2019). However, limited attention has been paid to the negative impact of the offline setting, i.e., training RLRS agents from static datasets.…”
Section: Related Work (confidence: 99%)