2019
DOI: 10.1007/978-3-030-30793-6_3
How to Make Latent Factors Interpretable by Feeding Factorization Machines with Knowledge Graphs

Abstract: Model-based approaches to recommendation can recommend items with a very high level of accuracy. Unfortunately, even when the model embeds content-based information, once we move to a latent space we lose the references to the actual semantics of the recommended items. Consequently, the interpretation of the recommendation process becomes non-trivial. In this paper, we show how to initialize latent factors in Factorization Machines by using semantic features coming from a knowledge graph in order to train an interpretable model.
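To make the approach concrete, below is a minimal, hypothetical sketch (not the paper's exact kaHFM procedure) of how item latent factors could be initialized from knowledge-graph triples so that each latent dimension keeps an explicit semantic label. The toy triples, the (predicate, object) feature encoding, and all variable names are illustrative assumptions.

import numpy as np

# Hypothetical toy knowledge-graph triples: (item, predicate, object).
# Each distinct (predicate, object) pair becomes one latent dimension,
# so every factor keeps a human-readable semantic label.
triples = [
    ("ItemA", "dbo:director", "dbr:Ridley_Scott"),
    ("ItemA", "dbo:genre", "dbr:Science_fiction"),
    ("ItemB", "dbo:director", "dbr:James_Cameron"),
    ("ItemB", "dbo:genre", "dbr:Science_fiction"),
]

items = sorted({s for s, _, _ in triples})
features = sorted({(p, o) for _, p, o in triples})   # one column per semantic feature
item_idx = {item: k for k, item in enumerate(items)}
feat_idx = {feat: k for k, feat in enumerate(features)}

# Binary item-feature matrix: 1 if the knowledge graph links the item to the feature.
V_items = np.zeros((len(items), len(features)))
for s, p, o in triples:
    V_items[item_idx[s], feat_idx[(p, o)]] = 1.0

# These rows would serve as the *initial* item latent factors of a Factorization
# Machine; after training, factor j of an item can still be read as the strength
# of the j-th named semantic feature.
print(features)   # human-readable names of the latent dimensions
print(V_items)    # initial latent factors, one row per item

Reading the rows of V_items back against the feature list is what makes the factors nameable: each dimension starts out as a concrete knowledge-graph property, which is the kind of link to item semantics the paper exploits for interpretability.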

Cited by 24 publications (4 citation statements)
References 45 publications
“…Built on this strategy, various models were proposed. For instance, by means of a textual knowledge graph, kaHFM [369] can learn item embeddings of semantic features as a supplement to the factorization machine (FM) [153]. After extracting the structural, textual and visual features of items from three different knowledge graphs, CKE [370] can construct a synthetic embedding for each item as a supplement.…”
Section: B. Recommendation Involving Knowledge
confidence: 99%
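For context, the factorization machine cited above as [153] is Rendle's second-order model; its standard prediction function (reproduced here as background, not quoted from the citing paper) is

\hat{y}(\mathbf{x}) = w_0 + \sum_{i=1}^{n} w_i x_i + \sum_{i=1}^{n} \sum_{j=i+1}^{n} \langle \mathbf{v}_i, \mathbf{v}_j \rangle \, x_i x_j

In the kaHFM approach discussed above, the item factor vectors v_i are the ones initialized from knowledge-graph features, which is what ties each latent dimension to a named semantic property.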
“…As the study focused on the selection and embedding of semantic features, Noia et al. showed how ontology-based (linked) data summarization can drive the selection of properties/features useful to a recommender system [18]. Anelli et al. showed how to initialize latent factors in Factorization Machines by using semantic features coming from a knowledge graph in order to train an interpretable model [19]. These studies did not directly focus on knowledge expansion or on recommendation algorithms that fully utilize knowledge graphs; knowledge graphs were used only as a source of additional information or of some of the features.…”
Section: Recommendation Algorithm
confidence: 99%
“…Based on this strategy, many works have been carried out. For instance, kaHFM [368] learns item embeddings of semantic features from a textual knowledge graph as a supplement to the factorization machine (FM) [153]. By extracting the structural, textual, and visual features of items from three different knowledge graphs, CKE [369] constructs a synthetic embedding for each item as supplementary information.…”
Section: Recommendation Involving Knowledge
confidence: 99%