2021
DOI: 10.48550/arxiv.2111.04983
Preprint

Dynamic Parameterized Network for CTR Prediction

Abstract: Learning to capture feature relations effectively and efficiently is essential in click-through rate (CTR) prediction for modern recommendation systems. Most existing CTR prediction methods model such relations either through tedious manually designed low-order interactions or through inflexible and inefficient high-order interactions, both of which require extra DNN modules for implicit interaction modeling. In this paper, we propose a novel plug-in operation, Dynamic Parameterized Operation (DPO), to learn both e…
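The abstract is truncated, so the paper's exact mechanism is not given here. Purely as a hypothetical sketch of what a "dynamic parameterized operation" could look like, the following PyTorch module uses a small hyper-network to generate instance-conditioned weights that are applied to the field embeddings, so the interaction parameters adapt to each input. All names, shapes, and the hyper-network design are our assumptions, not the paper's.

```python
import torch
import torch.nn as nn

class DynamicParameterizedOp(nn.Module):
    """Illustrative sketch only: a hyper-network produces a per-instance
    dim x dim weight matrix that is applied to every field embedding."""

    def __init__(self, num_fields: int, dim: int, hidden: int = 64):
        super().__init__()
        # Hyper-network: flattened instance embeddings -> dynamic weights.
        self.hyper = nn.Sequential(
            nn.Linear(num_fields * dim, hidden),
            nn.ReLU(),
            nn.Linear(hidden, dim * dim),
        )
        self.dim = dim

    def forward(self, emb: torch.Tensor) -> torch.Tensor:
        # emb: (batch, num_fields, dim) field embeddings.
        batch = emb.size(0)
        w = self.hyper(emb.flatten(1)).view(batch, self.dim, self.dim)
        # Apply the instance-specific linear map to each field embedding.
        return torch.bmm(emb, w)  # (batch, num_fields, dim)
```

The point of such a design, as opposed to a static interaction layer, is that the transformation itself is a function of the input, which is what makes the operation "dynamic" and usable as a plug-in inside existing CTR architectures.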

Cited by 1 publication (5 citation statements)
References 19 publications (52 reference statements)
“…Sharpness-aware minimization (SAM) (Foret et al. 2020) seeks flat minima to improve generalization; however, its large computational complexity hinders deployment in real-world applications. CR (Zhu et al. 2023) considers the gap between offline retraining and online serving and develops practical surrogate losses for maximizing offline model performance towards online deployment. However, none of these works address the problem of maximizing future performance.…”
Section: Related Work
confidence: 99%
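To make concrete why SAM roughly doubles training cost, here is a minimal, hypothetical PyTorch sketch of one SAM update: an ascent step to a worst-case weight perturbation, then a descent step using the gradient taken there. The function name, signature, and the rho default are our illustration, not from the cited work; the two forward/backward passes per update are the source of the complexity the statement mentions.

```python
import torch

def sam_step(model, loss_fn, batch, base_optimizer, rho=0.05):
    """One SAM update (Foret et al. 2020): ascend to a nearby worst-case
    point, then descend with the gradient computed at that point."""
    inputs, targets = batch

    # First pass: gradient at the current weights.
    loss = loss_fn(model(inputs), targets)
    loss.backward()

    # Ascent step: perturb weights by rho * g / ||g||.
    grads = [p.grad for p in model.parameters() if p.grad is not None]
    grad_norm = torch.norm(torch.stack([g.norm(2) for g in grads]), 2)
    eps = []
    with torch.no_grad():
        for p in model.parameters():
            if p.grad is None:
                continue
            e = rho * p.grad / (grad_norm + 1e-12)
            p.add_(e)
            eps.append((p, e))
    model.zero_grad()

    # Second pass at the perturbed point, then undo the perturbation.
    loss_fn(model(inputs), targets).backward()
    with torch.no_grad():
        for p, e in eps:
            p.sub_(e)

    # Descent step with the sharpness-aware gradient.
    base_optimizer.step()
    base_optimizer.zero_grad()
    return loss.item()
```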
“…Benefiting from the mitigation of distributional drift, the strategy of online fine-tuning with recent data can efficiently improve generalization at the next serving stage (Zhu et al. 2023; Rendle and Schmidt-Thieme 2008). In a real-world setting, we only care about the future performance on the test data D_new.…”
Section: Proposed Framework (Preliminaries and Problem Formulation)
confidence: 99%
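To make the fine-tuning protocol in the statement above concrete, here is a minimal illustrative loop: before the next serving stage, the deployed model is updated on the most recent interaction data to track distributional drift. The function name, signature, and step count are our assumptions, not the cited papers' implementation.

```python
import torch

def online_finetune(model, optimizer, loss_fn, recent_batches, steps=1):
    """Sketch of online fine-tuning: a few gradient passes over the most
    recent data before the model serves the upcoming (D_new) traffic."""
    model.train()
    for _ in range(steps):
        for inputs, labels in recent_batches:
            optimizer.zero_grad()
            loss = loss_fn(model(inputs), labels)
            loss.backward()
            optimizer.step()
    model.eval()  # ready for the next serving stage
    return model
```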