Proceedings of the 1st Workshop on Deep Learning for Recommender Systems 2016
DOI: 10.1145/2988450.2988454
Wide & Deep Learning for Recommender Systems

Abstract: Generalized linear models with nonlinear feature transformations are widely used for large-scale regression and classification problems with sparse inputs. Memorization of feature interactions through a wide set of cross-product feature transformations is effective and interpretable, while generalization requires more feature engineering effort. With less feature engineering, deep neural networks can generalize better to unseen feature combinations through low-dimensional dense embeddings learned for the spar…
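The abstract describes two paths combined into one prediction: a wide linear part over cross-product features (memorization) and a deep part over learned embeddings (generalization). Below is a minimal forward-pass sketch of that idea in numpy; all dimensions, parameter names, and the single-sparse-feature setup are hypothetical simplifications, not from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions (hypothetical, not from the paper).
n_wide = 8      # number of cross-product (wide) features
vocab = 10      # vocabulary size of one sparse feature
emb_dim = 4     # embedding dimension for the deep part
hidden = 16     # hidden units in the deep MLP

# Parameters.
w_wide = rng.normal(size=n_wide)           # linear weights (wide part)
emb = rng.normal(size=(vocab, emb_dim))    # embedding table for sparse IDs
W1 = rng.normal(size=(emb_dim, hidden))
b1 = np.zeros(hidden)
w_deep = rng.normal(size=hidden)           # output weights (deep part)
b = 0.0

def forward(x_wide, sparse_id):
    """Sum the wide and deep logits, then apply a sigmoid."""
    wide_logit = x_wide @ w_wide                   # memorization path
    h = np.maximum(emb[sparse_id] @ W1 + b1, 0.0)  # ReLU hidden layer
    deep_logit = h @ w_deep                        # generalization path
    return 1.0 / (1.0 + np.exp(-(wide_logit + deep_logit + b)))

x_wide = np.zeros(n_wide)
x_wide[2] = 1.0                  # one active cross-product feature
p = forward(x_wide, sparse_id=3) # probability in (0, 1)
```

The key design point from the abstract is that both logits feed a single output unit, so neither path is a separate model that gets ensembled after the fact.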

Cited by 2,811 publications (1,374 citation statements)
References 7 publications
“…The deep network is composed of two primary parts and, while not exactly the same, is inspired by recent research into Wide-and-Deep models, where sparse linear inputs are combined with deep learning to provide a more accurate model (Cheng et al. 2016). The analogous 'wide' input is a one-hot encoded input of the CPC codes for a patent and a one-hot encoded input of reference publication numbers.…”
Section: Machine Learning Methodologies
confidence: 99%
“…Machine learning methods are often trained in so-called batch modes. Nevertheless, many applications in the field of autonomous robotics or driving are trained on the basis of continuously arriving training data [15]. Thus, incremental learning facilitates learning from streaming data and hence is exposed to continuous model adaptation [15]. Especially handling non-stationary data assumes key importance in applications like voice and face recognition due to dynamically evolving patterns.…”
Section: Related Work
confidence: 99%
“…This model is used for CTR prediction and tested on commercial data. The system takes advantage of the Wide & Deep model proposed by Google [29], which jointly trains wide linear models and deep neural networks to overcome the sparsity of the user-item interaction matrix.…”
Section: Deep Collaborative Filtering Recommendation
confidence: 99%
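The joint training mentioned in the snippet above means one loss whose gradients update the wide weights and the deep part's embeddings in the same step, rather than training the two parts separately. A minimal sketch of one SGD step on the log loss, assuming a single sparse feature and hypothetical dimensions (none of these names come from the cited work):

```python
import numpy as np

rng = np.random.default_rng(1)
n_wide, vocab, emb_dim = 4, 6, 3

w_wide = rng.normal(size=n_wide)         # wide (linear) weights
emb = rng.normal(size=(vocab, emb_dim))  # deep part's embedding table
w_deep = rng.normal(size=emb_dim)        # deep output weights (held fixed here)
lr = 0.1

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def joint_sgd_step(x_wide, sparse_id, y):
    """One SGD step on log loss; the gradient reaches both model parts."""
    global w_wide
    h = emb[sparse_id]                       # deep path: embedding lookup
    p = sigmoid(x_wide @ w_wide + h @ w_deep)
    g = p - y                                # d(loss)/d(logit) for log loss
    w_wide -= lr * g * x_wide                # update the wide part
    emb[sparse_id] -= lr * g * w_deep        # update the deep part's embedding
    return p

x = np.array([1.0, 0.0, 1.0, 0.0])
p0 = joint_sgd_step(x, sparse_id=2, y=1.0)  # prediction before the update
p1 = joint_sgd_step(x, sparse_id=2, y=1.0)  # higher after one step toward y=1
```

Because the embedding for the observed ID receives gradient from the shared loss, rare user-item pairs still get updated representations, which is the mechanism the snippet credits for handling interaction-matrix sparsity.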