2017
DOI: 10.48550/arxiv.1703.04247
Preprint

DeepFM: A Factorization-Machine based Neural Network for CTR Prediction

Cited by 344 publications (445 citation statements)
References 14 publications
“…For offline evaluation, we compare our method with four categories of models: Feature Interaction (FI) models, User Interests Modeling (UIM) models, Graph Neural Network (GNN) models, and Transformer-based models. FI models include DeepFM [8] and xDeepFM [17]; UIM models include DIN [36], DIEN [35], and DMR [19]; GNN models include GCN [16], GAT [26], and GraphSAGE [9]; and HGNN models include RGCN [23], HAN [29], HGT [12], and NIRec [14].…”
Section: Competitors and Metrics | mentioning | confidence: 99%
“…To discover the potential click-through relation between user and item, the most popular learning paradigm is to first use an embedding layer to transfer the sparse user/item features into a low-dimensional dense embedding, and then to construct feature fusion & learning models to encode the user preferences, item characteristics, or their interactions. Typical models include Wide&Deep [2], DeepFM [8], xDeepFM [17], AFM [31], DeepMCP [20], and so on. However, this learning paradigm treats the sparse categorical features equally and ignores the intrinsic structures among them, e.g., the sequential order of historical behaviors.…”
mentioning | confidence: 99%
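The embedding-then-fusion paradigm described in the statement above can be illustrated with a minimal PyTorch sketch. All field names, vocabulary sizes, and dimensions below are illustrative assumptions, not drawn from any of the cited papers:

import torch
import torch.nn as nn

# Hypothetical toy setup: three sparse categorical fields
# (e.g., user id, item id, category), each with its own vocabulary.
field_sizes = [1000, 5000, 50]   # assumed vocabulary size per field
embed_dim = 8                    # low-dimensional dense embedding size

# Step 1: one embedding table per field maps sparse indices to dense vectors.
embeddings = nn.ModuleList(nn.Embedding(n, embed_dim) for n in field_sizes)

# Step 2: a feature fusion & learning model (here, a plain MLP over the
# concatenated embeddings; Wide&Deep, DeepFM, AFM, etc. differ mainly in
# how this fusion/interaction step is designed).
mlp = nn.Sequential(nn.Linear(len(field_sizes) * embed_dim, 16),
                    nn.ReLU(),
                    nn.Linear(16, 1))

x = torch.tensor([[42, 4096, 7]])     # one categorical index per field
dense = torch.cat([emb(x[:, i]) for i, emb in enumerate(embeddings)], dim=1)
ctr_logit = mlp(dense)                # click-through logit per example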
“…The second group serves as baselines without a cold-start-alleviating component, as well as backbones to test the generalization and adaptability of our proposed VELF. (1) DeepFM [7] is a deep recommendation method that learns both low- and high-level interactions between fields. (2) Wide&Deep [3] develops wide linear models and deep neural networks together to enhance their respective abilities.…”
Section: Baselines | mentioning | confidence: 99%
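Since DeepFM recurs as a baseline throughout these statements, a rough sketch of its structure may help: shared embeddings feed both an FM term (the low-level pairwise interactions) and an MLP (the high-level interactions), and the two parts are summed before the sigmoid. This is a minimal illustration under assumed toy sizes, not the authors' reference implementation:

import torch
import torch.nn as nn

class DeepFMSketch(nn.Module):
    """DeepFM-style model: shared embeddings feed both an FM term
    (low-level interactions) and an MLP (high-level interactions).
    Field sizes and dimensions are illustrative assumptions."""
    def __init__(self, field_sizes, embed_dim=8):
        super().__init__()
        self.embeds = nn.ModuleList(nn.Embedding(n, embed_dim) for n in field_sizes)
        self.linear = nn.ModuleList(nn.Embedding(n, 1) for n in field_sizes)  # first-order weights
        self.mlp = nn.Sequential(
            nn.Linear(len(field_sizes) * embed_dim, 32), nn.ReLU(),
            nn.Linear(32, 1))

    def forward(self, x):  # x: (batch, num_fields) long tensor of indices
        vs = torch.stack([e(x[:, i]) for i, e in enumerate(self.embeds)], dim=1)  # (B, F, D)
        first = sum(l(x[:, i]) for i, l in enumerate(self.linear))                # (B, 1)
        # Pairwise FM term via 0.5 * ((sum_i v_i)^2 - sum_i v_i^2), summed over D.
        fm = 0.5 * ((vs.sum(1) ** 2) - (vs ** 2).sum(1)).sum(1, keepdim=True)     # (B, 1)
        deep = self.mlp(vs.flatten(1))                                            # (B, 1)
        return torch.sigmoid(first + fm + deep)                                   # predicted CTR

model = DeepFMSketch([1000, 5000, 50])           # hypothetical field vocabularies
p = model(torch.tensor([[42, 4096, 7]]))         # hypothetical sparse indices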
“…We compare our VELF with the SOTA methods that alleviate the cold-start problem from the perspective of embedding learning, i.e., DropoutNet [28] and MWUF [34]. Comparisons with the state of the art are conducted on DeepFM [7], which is one of the most popular model structures used in industry. For more detailed and directed analysis, we also report the results of the second-group baseline models mentioned before.…”
Section: Comparison with State-of-the-Arts (RQ1) | mentioning | confidence: 99%
“…Although advertisers play an important role in this ecosystem, much less attention has been paid to understanding advertisers in either the academic or industrial community. Existing studies mainly focus on the user side [2,7,17,27,28,40,41]; some [8,37] have noted the necessity of advertiser understanding for platforms' long-term development, but they focused on predicting a single task such as churn rate. As different advertisers at different business cycles have various demands as well as advertising performance, for example, impressions or clicks of advertised products, ROI (i.e., return on investment), expenditure constraints, and active or churn rate, it is insufficient to measure the overall condition of advertisers based on one single task.…”
Section: Introduction | mentioning | confidence: 99%