Proceedings of the Twenty-Eighth International Joint Conference on Artificial Intelligence 2019
DOI: 10.24963/ijcai.2019/203

An Input-aware Factorization Machine for Sparse Prediction

Abstract: Factorization machines (FMs) are a class of general predictors that work effectively with sparse data by representing features with factorized parameters and weights. However, the accuracy of FMs can be adversely affected by the fixed representation trained for each feature, since the same feature is usually not equally predictive and useful across different instances. In fact, an inaccurate feature representation may even introduce noise and degrade overall performance. In this work, we improve FMs by expl…
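For context, a minimal NumPy sketch of the standard second-order FM predictor that the abstract builds on; the function name fm_predict and the O(nk) reformulation of the pairwise term are illustrative, not the paper's implementation.

```python
import numpy as np

def fm_predict(x, w0, w, V):
    """Second-order factorization machine prediction (illustrative sketch).

    x  : (n_features,) input vector, typically sparse
    w0 : global bias
    w  : (n_features,) per-feature linear weights
    V  : (n_features, k) factorized embeddings; the pairwise interaction
         weight for features i and j is the inner product <V[i], V[j]>.
    """
    linear = w0 + w @ x
    # O(nk) reformulation of sum_{i<j} <v_i, v_j> x_i x_j
    xv = V.T @ x                      # (k,)
    x2v2 = (V ** 2).T @ (x ** 2)      # (k,)
    pairwise = 0.5 * np.sum(xv ** 2 - x2v2)
    return linear + pairwise
```

Note that in this vanilla FM the embedding V[i] for feature i is the same in every instance; that fixed representation is the limitation the abstract points to.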

Cited by 34 publications (22 citation statements); References: 13 publications
“…Recently, input-dependent models have shown effectiveness in various domains, such as language modeling [27], [28] and computer vision [29], [30]. In recommendation, IFM [31] and DIFM [32] have been proposed to re-weight the representations of features and weights for different input instances before performing feature interactions. Inspired by these studies, we design a dynamic transformer encoder that applies an individual attention network at the self-attention layer, enabling the model to capture user-specific intra-item patterns unveiled by the user intentions.…”
Section: Related Work
confidence: 99%
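As a rough illustration of the per-instance re-weighting idea described in this statement, the sketch below rescales each active feature's embedding by an input-dependent factor before the pairwise interaction. The helper names (input_aware_factors, reweighted_fm_interaction) and the tiny MLP used to produce the factors are assumptions for illustration, not the actual IFM or DIFM architectures.

```python
import numpy as np

rng = np.random.default_rng(0)

def input_aware_factors(E, W1, W2):
    """Illustrative factor-estimating step: map the instance's stacked
    embeddings to one positive scaling factor per active feature.

    E  : (m, k) embeddings of the m active features in this instance
    W1 : (m*k, h), W2 : (h, m) hypothetical MLP weights
    """
    h = np.tanh(E.reshape(-1) @ W1)      # (h,) hidden representation
    factors = np.exp(h @ W2)             # (m,) positive, input-dependent
    return factors

def reweighted_fm_interaction(E, factors):
    """Pairwise FM interactions after per-instance re-weighting."""
    E_scaled = E * factors[:, None]      # rescale each feature embedding
    s = E_scaled.sum(axis=0)
    sq = (E_scaled ** 2).sum(axis=0)
    return 0.5 * np.sum(s ** 2 - sq)     # sum_{i<j} <e_i, e_j>

# Toy usage with m=4 active features, k=8 factors, hidden size h=16.
m, k, h = 4, 8, 16
E = rng.normal(size=(m, k))
W1 = rng.normal(size=(m * k, h)) * 0.1
W2 = rng.normal(size=(h, m)) * 0.1
print(reweighted_fm_interaction(E, input_aware_factors(E, W1, W2)))
```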
“…Domain Adaptation (DA) aims to train a target-domain classifier with samples from source and target domains (Lu et al. 2015). When the labels of samples in the target domain are unavailable, DA is known as unsupervised DA (UDA) (Zhong et al. 2020; Fang et al. 2020), which has been applied to address diverse real-world problems, such as computer vision (Zhang et al. 2020c; Dong et al. 2019, 2020b), natural language processing (Lee and Jha 2019; Guo, Pasunuru, and Bansal 2020), and recommender systems (Zhang et al. 2017; Yu, Wang, and Yuan 2019; Lu et al. 2020). Significant theoretical advances have been achieved in UDA. Pioneering theoretical work was proposed by Ben-David et al. (2007).…”
Section: Introduction
confidence: 99%
“…Facial expression is one of the most powerful, natural, and universal symbols for human beings to show their emotions [34]. With the popularization of mobile devices and social networks [79], more and more people prefer to record their daily activities [66], share their feelings, and express their opinions through images and videos, which generates a massive quantity of multimedia data [21,39,72]. The primary task of current, extensively applied facial recognition systems [68] is to identify and verify people.…”
Section: Introduction
confidence: 99%