2015 IEEE International Conference on Big Data (Big Data)
DOI: 10.1109/bigdata.2015.7364112

Factorization machines with follow-the-regularized-leader for CTR prediction in display advertising

Cited by 42 publications (24 citation statements); the citing publications span 2016-2023. References 4 publications. Citation excerpts below are ordered by relevance.

“…We take CTR estimation in online advertising as the working example to explore the learning ability of our PNN model. The extensive experimental results on two large-scale real-world datasets demonstrate the consistent superiority of our model over state-of-the-art user response prediction models [6], [13], [12] on various metrics.…”
Section: Introduction (mentioning)
confidence: 64%
“…The data collection in these IR tasks is mostly in a multi-field categorical form, for example, [Weekday=Tuesday, Gender=Male, City=London], which is normally transformed into high-dimensional sparse binary features via one-hot encoding [4]. For example, the three field vectors with one-hot encoding are concatenated as […]. Many machine learning models, including linear logistic regression [5], non-linear gradient boosting decision trees [4] and factorization machines [6], have been proposed to work on such high-dimensional sparse binary features and produce high-quality user response predictions. However, these models highly depend on feature engineering in order to capture high-order latent patterns [7].…”
Section: Introduction (mentioning)
confidence: 99%
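
The multi-field one-hot encoding described in the excerpt above can be made concrete with a short sketch. This is an illustration only, not code from either the cited or the citing paper; the field vocabularies and the resulting indices are assumptions chosen for the example.

# Minimal sketch of multi-field one-hot encoding with assumed field vocabularies.
fields = {
    "Weekday": ["Mon", "Tue", "Wed", "Thu", "Fri", "Sat", "Sun"],
    "Gender": ["Male", "Female"],
    "City": ["Beijing", "London", "New York", "Paris"],
}

def one_hot_concat(sample):
    # Concatenate per-field one-hot vectors into one sparse binary feature vector.
    vector = []
    for field, vocab in fields.items():
        one_hot = [0] * len(vocab)
        one_hot[vocab.index(sample[field])] = 1
        vector.extend(one_hot)
    return vector

# [Weekday=Tuesday, Gender=Male, City=London] -> one long sparse binary vector:
print(one_hot_concat({"Weekday": "Tue", "Gender": "Male", "City": "London"}))
# [0, 1, 0, 0, 0, 0, 0, 1, 0, 0, 1, 0, 0]

Each field contributes a block whose length equals its vocabulary size, which is why the concatenated vector becomes very high-dimensional and sparse once realistic vocabularies (e.g., all cities) are used.
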
“…where, to simplify our notation, we substitute f_θ(x) with its predicted CTR variable r. […] where µ is the Lagrangian multiplier. Setting the derivative equal to zero, we get […]. To solve for µ, we take the Lagrangian derivative w.r.t. […]…”
Section: Joint Optimization With Bidding Function (mentioning)
confidence: 99%
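
The excerpt above is fragmentary because the equations it refers to were lost in extraction. As a hedged illustration only, the general shape of the Lagrangian argument it describes can be sketched as follows; the utility U, cost C, CTR density p(r), and budget B are assumed forms, not the citing paper's actual objective.

% Generic sketch: optimize a bidding function b(r) of the predicted CTR r
% under a budget constraint B (assumed forms, for illustration only).
\[
\begin{aligned}
&\max_{b(\cdot)} \int_r U\bigl(r, b(r)\bigr)\, p(r)\, \mathrm{d}r
  \quad \text{s.t.} \quad \int_r C\bigl(r, b(r)\bigr)\, p(r)\, \mathrm{d}r = B, \\
&\mathcal{L}(b, \mu) = \int_r U\bigl(r, b(r)\bigr)\, p(r)\, \mathrm{d}r
  - \mu \left( \int_r C\bigl(r, b(r)\bigr)\, p(r)\, \mathrm{d}r - B \right), \\
&\frac{\partial \mathcal{L}}{\partial b(r)} = 0
  \;\Rightarrow\; b^{*}(r) \text{ in terms of } \mu, \qquad
  \frac{\partial \mathcal{L}}{\partial \mu} = 0
  \;\Rightarrow\; \text{the constraint that determines } \mu.
\end{aligned}
\]

Setting the derivative with respect to the bid to zero gives the optimal bidding function up to the multiplier µ, and taking the derivative with respect to µ recovers the budget constraint used to solve for µ, which matches the structure of the quoted argument.
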
“…From the methodology view, linear models such as logistic regression [14] and non-linear models such as tree-based models [10] and factorization machines [19,21] are commonly used. Other variants include Bayesian probit regression [9], FTRL [24] applied to factorization machines, and a convolutional neural network learning framework [17]. Normally, the area under the ROC curve (AUC) and relative information gain (RIG) are common evaluation metrics for CTR prediction accuracy [9].…”
Section: Related Work (mentioning)
confidence: 99%
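
Since the paper under discussion combines factorization machines with follow-the-regularized-leader (FTRL) learning, a compact sketch of the two ingredients may help situate it among the models listed above. The sketch uses the standard degree-2 FM prediction and the standard FTRL-Proximal per-coordinate update for the linear weights, with plain SGD on the latent factors; it is a schematic reconstruction, not the authors' exact algorithm, and the class name, hyperparameters, and the SGD treatment of the factors are assumptions.

import numpy as np

# Degree-2 factorization machine with FTRL-Proximal updates on the linear
# weights and SGD on the latent factors (standard formulations; illustrative).
class FMFTRLSketch:
    def __init__(self, n_features, k=8, alpha=0.1, beta=1.0, l1=1.0, l2=1.0, eta=0.01):
        self.alpha, self.beta, self.l1, self.l2, self.eta = alpha, beta, l1, l2, eta
        self.z = np.zeros(n_features)                   # FTRL accumulated gradients minus shifts
        self.n = np.zeros(n_features)                   # FTRL accumulated squared gradients
        self.V = 0.01 * np.random.randn(n_features, k)  # FM latent factors

    def _w(self, i):
        # FTRL-Proximal closed-form per-coordinate weight (sparse due to L1).
        if abs(self.z[i]) <= self.l1:
            return 0.0
        return -(self.z[i] - np.sign(self.z[i]) * self.l1) / (
            (self.beta + np.sqrt(self.n[i])) / self.alpha + self.l2)

    def predict_proba(self, idx):
        # idx: indices of the non-zero entries of a sparse binary feature vector.
        linear = sum(self._w(i) for i in idx)
        vs = self.V[idx]
        pair = 0.5 * np.sum(np.sum(vs, axis=0) ** 2 - np.sum(vs ** 2, axis=0))
        return 1.0 / (1.0 + np.exp(-(linear + pair)))

    def update(self, idx, y):
        # Logistic loss; gradient of the loss w.r.t. the raw score is (p - y).
        p = self.predict_proba(idx)
        g = p - y
        for i in idx:                                   # FTRL update of linear weights
            sigma = (np.sqrt(self.n[i] + g * g) - np.sqrt(self.n[i])) / self.alpha
            self.z[i] += g - sigma * self._w(i)
            self.n[i] += g * g
        s = self.V[idx].sum(axis=0)                     # SGD update of latent factors
        for i in idx:
            self.V[i] -= self.eta * g * (s - self.V[i])

# Usage with the 13-dimensional one-hot sketch from earlier (nonzero indices assumed).
model = FMFTRLSketch(n_features=13)
model.update(idx=[1, 7, 10], y=1)
print(model.predict_proba([1, 7, 10]))

Evaluating such a model with AUC or relative information gain, as the excerpt notes, is independent of the training algorithm and only requires the predicted probabilities and the observed click labels.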