Proceedings of the Eleventh ACM Conference on Recommender Systems 2017
DOI: 10.1145/3109859.3109892
An Elementary View on Factorization Machines

Cited by 9 publications (2 citation statements)
References 5 publications
“…As shown in Figure 1 below, many scientists and researchers have made various improvements 2 , which can be roughly categorized into two kinds, namely feature width expansion as well as deep learning. Width expansion is mainly improved by introducing feature engineering at input, handling independent feature interactions [3][4][5] , handling correlated feature interactions 6,7 , and multi-model integration at output 8,9 . Deep learning, on the other hand, implements various deep feature mining for FM algorithms by initializing the embedding layer of the original features 10 , using a linear structure for the width part, a DNN model for the depth part [11][12][13][14] , and a Cross network for the width part to obtain low-order interaction information 15 , respectively.…”
Section: Introduction
Confidence: 99%
“…As the feature interactions are modelled by the latent vectors of original features, the latent vectors take the responsibility of both representation learning and feature interaction modelling. These two tasks may conflict with each other so that the model performance is bounded, as observed in [13], [14].…”
Section: Introduction
Confidence: 99%
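The excerpt above refers to the dual role of the latent vectors in a second-order Factorization Machine: each feature's latent vector both represents the feature and, via inner products, supplies the pairwise interaction weights. A minimal NumPy sketch of the standard FM prediction (Rendle's O(kn) reformulation; variable names are illustrative, not from the paper):

```python
import numpy as np

def fm_predict(x, w0, w, V):
    """Second-order Factorization Machine prediction.

    x  : feature vector, shape (n,)
    w0 : global bias (scalar)
    w  : linear weights, shape (n,)
    V  : latent factor matrix, shape (n, k); the interaction weight
         between features i and j is the inner product <V[i], V[j]>,
         so V serves both representation and interaction modelling.
    """
    linear = w0 + w @ x
    # O(k*n) reformulation of the pairwise term:
    # sum_{i<j} <V[i],V[j]> x_i x_j
    #   = 0.5 * sum_f [ (sum_i V[i,f] x_i)^2 - sum_i V[i,f]^2 x_i^2 ]
    s = V.T @ x                   # shape (k,)
    s_sq = (V ** 2).T @ (x ** 2)  # shape (k,)
    pairwise = 0.5 * np.sum(s ** 2 - s_sq)
    return linear + pairwise
```

Because every interaction weight is tied to the shared vectors in `V`, tuning `V` to model interactions can pull it away from the best per-feature representation, which is the tension the quoted passage describes.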