Proceedings of the 45th International ACM SIGIR Conference on Research and Development in Information Retrieval 2022
DOI: 10.1145/3477495.3532060
Single-shot Embedding Dimension Search in Recommender System

Cited by 11 publications (4 citation statements) · References 22 publications
“…Model Compression [10,30,32,43,49,61] aims to obtain lightweight models by removing redundant weights. It becomes an effective solution to reduce model size in the fields of computer vision [8,35], graph learning [6,12,13,[36][37][38][39], and natural language processing [3][4][5].…”
Section: Related Work
confidence: 99%
“…Following the previous works [17,25], we evaluate the performance of our method using two common metrics: AUC and Logloss. AUC refers to the area under the ROC curve, which means the probability that a model will rank a randomly selected positive instance higher than a randomly selected negative one.…”
Section: Evaluation Metrics
confidence: 99%
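The statement above defines AUC as the probability that a random positive instance is scored above a random negative one, alongside Logloss. A minimal sketch of both metrics, using illustrative labels and scores (not data from the paper):

```python
import math

def pairwise_auc(labels, scores):
    """AUC via exhaustive positive/negative pair comparison (ties count 0.5)."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def logloss(labels, probs, eps=1e-12):
    """Mean negative log-likelihood of the true labels under predicted probabilities."""
    return -sum(y * math.log(max(p, eps)) + (1 - y) * math.log(max(1 - p, eps))
                for y, p in zip(labels, probs)) / len(labels)

labels = [1, 0, 1, 0, 1]
scores = [0.9, 0.3, 0.6, 0.7, 0.8]
print(pairwise_auc(labels, scores))  # → 0.8333… (5 of 6 pairs ranked correctly)
print(logloss(labels, scores))
```

In practice these would come from a metrics library; the exhaustive pairwise form is shown only because it matches the probabilistic definition quoted above.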
“…The embedding tables play a fundamental role in the recommendation system, as they dominate the majority of parameters. However, most existing methods construct their proposed recommender models with large-sized embedding tables and a uniform dimension size for all possible fields [8,10,29], which may lead to overfitting, high computational cost, and poor model generalization [15,25,33]. Therefore, the first objective for an optimal DLRM is to find optimal embedding dimensions for different fields and remove redundant dimensions.…”
Section: Introduction
confidence: 99%
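The statement above contrasts a uniform embedding dimension for all fields with per-field (mixed) dimensions. A minimal sketch of the parameter-count difference, with hypothetical field names, vocabulary sizes, and dimensions chosen for illustration:

```python
import random

# Hypothetical fields: per-field embedding dimensions instead of a uniform 16.
field_dims = {"user_id": 16, "item_id": 8, "category": 4, "hour_of_day": 2}
vocab_sizes = {"user_id": 10_000, "item_id": 50_000, "category": 200, "hour_of_day": 24}

# One embedding table per field, each row sized to that field's dimension.
tables = {
    f: [[random.gauss(0.0, 0.01) for _ in range(field_dims[f])]
        for _ in range(vocab_sizes[f])]
    for f in field_dims
}

uniform_params = sum(v * 16 for v in vocab_sizes.values())      # all fields at dim 16
mixed_params = sum(vocab_sizes[f] * field_dims[f] for f in field_dims)
print(uniform_params, mixed_params)  # mixed dimensions need far fewer parameters
```

The actual dimension assignment in the paper is searched rather than hand-picked; this sketch only illustrates why removing redundant dimensions shrinks the embedding tables, which dominate a DLRM's parameter budget.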
“…Therefore, in order to make feature representations of different dimensions that can be used seamlessly in these base models, we use a generalization framework that contains a dimension alignment layer, as shown in Figure 9. This method is also used in [41,69]. Specifically, for each field, we initialize an alignment matrix.…”
Section: A. The Generalization Framework of the FDO Approach
confidence: 99%
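The statement above describes a dimension alignment layer: a per-field alignment matrix that projects each field's embedding, whatever its dimension, to the common dimension expected by the base model. A minimal sketch under that description, with illustrative names and sizes:

```python
import random

D = 8  # shared dimension expected by the base model (illustrative)
field_dims = {"user_id": 16, "category": 4}  # hypothetical per-field dimensions

# One alignment matrix per field, shaped (field_dim, D).
align = {f: [[random.gauss(0.0, 0.1) for _ in range(D)] for _ in range(d)]
         for f, d in field_dims.items()}

def align_embedding(field, emb):
    """Project a field-specific embedding of length field_dims[field] to length D."""
    M = align[field]
    return [sum(emb[i] * M[i][j] for i in range(len(emb))) for j in range(D)]

e = [1.0] * field_dims["category"]
aligned = align_embedding("category", e)
print(len(aligned))  # every field now yields a D-dimensional vector
```

With this layer in place, downstream interaction layers see uniform D-dimensional inputs regardless of how aggressively each field's embedding dimension was reduced.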