2022
DOI: 10.1002/int.22827

Exploring lottery ticket hypothesis in media recommender systems

Abstract: Media recommender systems aim to capture users' preferences and provide precise, personalized recommendations of media content. There are two critical components in the common paradigm of modern recommender models: (1) representation learning, which generates an embedding for each user and item; and (2) interaction modeling, which fits user preferences toward items based on their representations. In spite of great success, when a large number of users and items exist, it usually needs to create, store, and optim…
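The two components named in the abstract can be illustrated with a minimal matrix-factorization sketch. PyTorch is assumed here; the embedding size and dot-product scorer are illustrative choices, not the paper's exact architecture.

```python
# Minimal sketch of the two-component recommender paradigm from the abstract.
# Model shape and scorer are illustrative assumptions, not the paper's method.
import torch
import torch.nn as nn

class MatrixFactorization(nn.Module):
    def __init__(self, num_users: int, num_items: int, dim: int = 64):
        super().__init__()
        # (1) Representation learning: one embedding per user and per item.
        self.user_emb = nn.Embedding(num_users, dim)
        self.item_emb = nn.Embedding(num_items, dim)

    def forward(self, user_ids: torch.Tensor, item_ids: torch.Tensor) -> torch.Tensor:
        # (2) Interaction modeling: score a user's preference for an item
        # from their representations (here a simple inner product).
        return (self.user_emb(user_ids) * self.item_emb(item_ids)).sum(dim=-1)

# Every user/item gets a full dim-sized embedding, so parameter count grows
# linearly with the number of users and items -- the storage and optimization
# cost the abstract points to.
model = MatrixFactorization(num_users=100_000, num_items=50_000)
```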

Help me understand this report
View preprint versions

Search citation statements

Order By: Relevance

Paper Sections

Select...
2
1
1
1

Citation Types

0
10
0

Year Published

2024
2024
2024
2024

Publication Types

Select...
3
2

Relationship

1
4

Authors

Journals

citations
Cited by 5 publications
(10 citation statements)
references
References 23 publications
0
10
0
Order By: Relevance
“…However, since the amount of information carried by diverse users and items is very likely to be different, the strategy of assigning the same embedding size may be suboptimal. Observations from prior studies such as ESAPN [26], UnKD [2], and LTH-MRS [46], suggest that inactive users and unpopular items may contain less information, necessitating smaller embedding sizes to depict their simpler characteristics. Hence, we relax the constraint on embedding size and encourage the model to dynamically discover those redundant weights during learning.…”
Section: Methodology 3.1 Dynamic Sparse Learning (mentioning)
Confidence: 99%
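A hedged sketch of the idea in the quoted passage: rather than fixing one embedding size for everyone, small-magnitude entries are zeroed out so each user or item keeps only as many effective dimensions as it needs. The global magnitude threshold below is an assumption for illustration, not the exact rule of ESAPN, UnKD, or LTH-MRS.

```python
# Illustrative magnitude-based sparsification of an embedding table.
import torch

def sparsify_embeddings(table: torch.Tensor, sparsity: float = 0.8) -> torch.Tensor:
    """Zero the smallest-magnitude entries, keeping a (1 - sparsity) fraction."""
    k = max(1, int(table.numel() * (1.0 - sparsity)))
    threshold = table.abs().flatten().kthvalue(table.numel() - k + 1).values
    return table * (table.abs() >= threshold).float()

user_table = torch.randn(1000, 64)              # toy user-embedding table
sparse_table = sparsify_embeddings(user_table)
# Per-row "effective embedding size" = number of surviving dimensions; rows
# dominated by small weights (e.g., inactive users) end up with fewer of them.
effective_dims = (sparse_table != 0).sum(dim=1)
```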
“…• MP-based methods: PEP [27], LTH-MRS [46], Random Pruning (RP) [62], One-shot Magnitude Pruning (OMP) [16], Without Rewinding (WR) [34].…”
Section: Baselines (mentioning)
Confidence: 99%
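For context on the MP-based (magnitude-pruning) baselines listed above, a minimal sketch of the generic train-prune-rewind cycle they vary (one-shot vs. iterative pruning, with vs. without rewinding). The `train_fn` hook, sparsity level, and per-tensor pruning are illustrative assumptions, not any specific baseline's recipe.

```python
# Hedged sketch of a lottery-ticket-style "train, prune by magnitude, rewind" step.
import copy
import torch

def prune_and_rewind(model, train_fn, sparsity: float = 0.8, rewind: bool = True):
    init_state = copy.deepcopy(model.state_dict())   # weights at initialization
    train_fn(model)                                   # train to convergence
    with torch.no_grad():
        for name, p in model.named_parameters():
            k = max(1, int(p.numel() * (1.0 - sparsity)))
            thresh = p.abs().flatten().kthvalue(p.numel() - k + 1).values
            mask = (p.abs() >= thresh).float()        # keep largest-magnitude weights
            source = init_state[name] if rewind else p
            p.copy_(source * mask)                    # rewind=False gives the
    return model                                      # "Without Rewinding" variant
```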