Proceedings of the 44th International ACM SIGIR Conference on Research and Development in Information Retrieval 2021
DOI: 10.1145/3404835.3463015

Explicit Semantic Cross Feature Learning via Pre-trained Graph Neural Networks for CTR Prediction

Abstract: Cross features play an important role in click-through rate (CTR) prediction. Most existing methods adopt a DNN-based model to capture cross features in an implicit manner. These implicit methods may lead to sub-optimal performance due to their limited capacity for explicit semantic modeling. Although traditional statistical explicit semantic cross features can address the problem in these implicit methods, such features still suffer from some challenges, including lack of generalization and expensive me…

Cited by 22 publications (28 citation statements) · References 16 publications
“…Due to their strength in graph representations, GNNs have been used to alleviate the feature sparsity and behavior sparsity problems in CTR prediction, by converting feature interactions into node interactions in a graph structure. Li et al (2019b) proposed a feature interaction GNN (Fi-GNN) where field-aware feature interactions were realized by assigning two interaction matrices to each node in a complete graph; in a later work, Li et al (2021a) used pre-trained GNNs to generate explicit semantic cross features and applied a weighted square loss to compute the importance of these features.…”
Section: GNNs in CTR Prediction
confidence: 99%
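The field-aware interaction described in the statement above can be sketched as follows. This is a minimal, hypothetical illustration of the Fi-GNN idea (not the paper's implementation): each node in a complete feature graph carries its own outgoing and incoming transformation matrices, so the message from node j to node i is shaped by both endpoints. All names and dimensions are illustrative.

```python
# Hypothetical sketch of Fi-GNN-style field-aware interaction in a
# complete feature graph. Each node (feature field) has its own
# per-node matrices, so edges (i, j) and (i, k) transform messages
# differently even though the graph is complete.

def matvec(m, v):
    """Multiply a small matrix (list of rows) by a vector."""
    return [sum(a * b for a, b in zip(row, v)) for row in m]

def fignn_step(states, out_mats, in_mats):
    """One interaction step: node i aggregates
    W_in[i] @ (W_out[j] @ h_j) over every other node j."""
    new_states = []
    for i in range(len(states)):
        agg = [0.0] * len(states[i])
        for j, h_j in enumerate(states):
            if i == j:
                continue  # complete graph, no self-loop
            msg = matvec(in_mats[i], matvec(out_mats[j], h_j))
            agg = [a + m for a, m in zip(agg, msg)]
        new_states.append(agg)
    return new_states
```

With identity matrices this reduces to summing the other nodes' states, which makes the aggregation easy to check by hand.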
“…Recently, quite a few studies have established CTR prediction models in the GNN framework, by modeling high-order interactions in a feature graph (Li et al, 2019b; Li et al, 2021a; Li et al, 2021b), designing graph intention networks (Li et al, 2019a) and dynamic sequential graphs (Chu et al, 2021) to enrich users' behaviors, and constructing attribution graphs and collaborative graphs to address the sparsity issue (Guo et al, 2021). However, most existing GNN-based models handle feature interactions by aggregating information from all neighbors equally in a complete graph (Tao et al, 2020).…”
Section: Introduction
confidence: 99%
“…However, the message passing phase of both GCN and GIN (see Table 5) aggregates features from neighbors before computing any feature combinations, thus ignoring the possible interactions between features of grain pairs. In the literature, different approaches to learning sophisticated feature interactions across adjacent nodes have been proposed [26][27][28], which might be interesting for modeling microstructural deformation. The GCN performing slightly better than the GIN (see Table 1) might also be linked to the aggregation of messages across neighbor grains.…”
confidence: 99%
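The distinction the statement above draws can be made concrete with a small sketch: GCN-style message passing averages neighbor features before any combination, whereas an explicit cross feature keeps second-order products between node pairs. This is an illustrative assumption-laden toy, not code from either cited paper.

```python
# Minimal sketch contrasting plain neighbor aggregation (as in GCN/GIN
# message passing) with an explicit pairwise cross term. Averaging
# first discards products between individual neighbors' features.

def mean_aggregate(neighbor_feats):
    """GCN-style: average neighbor feature vectors before combining."""
    n = len(neighbor_feats)
    dim = len(neighbor_feats[0])
    return [sum(f[d] for f in neighbor_feats) / n for d in range(dim)]

def pairwise_cross(h_i, h_j):
    """Explicit second-order cross feature between two nodes:
    the element-wise product, as in factorization-style models."""
    return [a * b for a, b in zip(h_i, h_j)]
```

The mean of two neighbors loses which neighbor contributed which value; the element-wise product retains a pair-specific interaction term.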
“…Generally, existing supervised signals can be classified into two main categories. The first is node-level tasks, which aim at predicting localized properties using node representations, such as graph structure reconstruction [3,6,7,15,11], localized attribute prediction [6,7], and node representation recovery [4]. The second is graph-level tasks, which define a globalized optimization goal for the entire graph, such as graph property prediction [6,12] and mutual information maximization [21,28,17,15,27].…”
Section: Pre-training Graph Neural Networks
confidence: 99%
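The two families of pre-training signals named above can be sketched with toy losses. This is a hedged illustration (plain squared errors, invented function names), not the objectives from any of the cited works: a node-level task scores per-node attribute reconstruction, while a graph-level task scores one prediction for the whole graph.

```python
# Hypothetical sketch of the two pre-training signal families:
# node-level (localized) vs graph-level (globalized) objectives.
# All names and loss choices are illustrative.

def node_level_loss(pred_attrs, true_attrs):
    """Node-level task: recover each node's (e.g. masked) attribute;
    mean squared error over nodes."""
    n = len(true_attrs)
    return sum((p - t) ** 2 for p, t in zip(pred_attrs, true_attrs)) / n

def graph_level_loss(pred_prop, true_prop):
    """Graph-level task: predict one scalar property of the entire
    graph (e.g. pooled from node embeddings); squared error."""
    return (pred_prop - true_prop) ** 2
```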