Proceedings of the 23rd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining 2017
DOI: 10.1145/3097983.3098008
Discrete Content-aware Matrix Factorization

Cited by 68 publications (47 citation statements)
References 21 publications
“…This is consistent with the findings in [Liu et al., 2014] that direct discrete optimization is stronger than two-stage approaches, and that side information makes the user and item codes more representative, which can boost recommendation performance. However, the rather small performance gap between DCF and DCMF indicates that DCMF fails to make full use of the side information.…”
Section: Performance Comparison (RQ1)
confidence: 91%
“…Similar to the idea of projection, [Zhou and Zha, 2012] generated binary codes from rotated continuous user-item latent factors by running ITQ [Gong and Lazebnik, 2011]. To derive more compact binary codes, [Liu et al., 2014] imposed a de-correlation constraint on the continuous user-item latent factors and then rounded them to produce binary codes. However, [Zhang et al., 2014] argued that hashing only preserves the similarity between users and items rather than inner-product-based preference, so subsequent hashing may harm the accuracy of preference prediction; they therefore imposed a Constant Feature Norm (CFN) constraint on the continuous user-item latent factors and then quantized the similarities by thresholding their magnitudes and phases, respectively.…”
Section: Efficient Recommendation
confidence: 99%
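The two-stage hashing idea discussed in the excerpt above (learn continuous user-item latent factors, then round them to binary codes) can be illustrated with a minimal sketch. This is not the DCMF algorithm from the paper, only a hypothetical sign-rounding baseline on random factors, showing how binary codes approximate inner-product preferences:

```python
import numpy as np

rng = np.random.default_rng(0)

# Continuous user/item latent factors, e.g. from matrix factorization
# (dimensions are illustrative, not taken from the paper).
U = rng.standard_normal((100, 8))   # 100 users, 8-dim factors
V = rng.standard_normal((50, 8))    # 50 items

# Two-stage hashing: round each latent factor to a binary code by sign.
# Sign rounding roughly preserves angular similarity but not the inner
# product itself -- the concern raised by [Zhang et al., 2014] above.
Bu = np.sign(U)   # entries in {-1, +1}
Bv = np.sign(V)

# With d-bit codes, the code inner product is equivalent to a scaled
# Hamming similarity: <bu, bv> = d - 2 * hamming(bu, bv).
pred_binary = Bu @ Bv.T   # integer scores in [-8, 8]
pred_real = U @ V.T       # real-valued inner products

# Agreement between binary and real preference scores.
corr = np.corrcoef(pred_binary.ravel(), pred_real.ravel())[0, 1]
print(f"correlation between binary and real scores: {corr:.2f}")
```

The correlation is positive but well below 1, which is the gap that direct discrete optimization methods such as DCF and DCMF aim to close by learning the binary codes directly rather than rounding after the fact.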