Proceedings of the 3rd Workshop on Deep Learning for Recommender Systems 2018
DOI: 10.1145/3270323.3270327
Knowledge-aware Autoencoders for Explainable Recommender Systems

Abstract: Recommender Systems have been widely used to help users in finding what they are looking for, thus tackling the information overload problem. After several years of research and industrial findings looking after better algorithms to improve accuracy and diversity metrics, explanation services for recommendation are gaining momentum as a tool to provide a human-understandable feedback to results computed, in most of the cases, by black-box machine learning techniques. As a matter of fact, explanations may guaran…

Cited by 26 publications (27 citation statements). References 19 publications.
“…Neural-symbolic Systems [297,298,299,300] KB-enhanced Systems [24,169,301,308,309,310] Deep Formulation [264,302,303,304,305] Relational Reasoning [75,312,313,314] Case-base Reasoning [316,317,318] Fig. 11.a) ( Fig.…”
Section: Hybrid Transparent and Black-box Methods
confidence: 99%
“…We want to stress here that, although each autoencoder is trained over a relatively small number of samples, in [9] we prove that recommendation results achieve very good performance in terms of accuracy and diversity, also compared to state-of-the-art algorithms. To train this kind of autoencoder, we inhibit the feedforward and backpropagation steps for those neurons that are not connected in the KG by using a masking multiplier matrix M whose rows and columns represent items and features, respectively.…”
Section: Semantics-aware Autoencoder
confidence: 99%
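The masking scheme described in the quoted passage can be sketched as follows. This is a hypothetical illustration, not the authors' code: a one-hidden-layer autoencoder in NumPy whose item-to-feature weights are gated elementwise by a binary mask M built from the knowledge graph, so that item–feature pairs with no KG link neither propagate activations forward nor receive gradient updates. All sizes, data, and the random mask are toy assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n_items, n_features = 6, 4          # toy sizes, assumed for illustration

# M[i, f] = 1 iff item i is linked to feature f in the KG (toy random mask)
M = (rng.random((n_items, n_features)) > 0.5).astype(float)

W_in = rng.normal(scale=0.1, size=(n_items, n_features))
W_out = rng.normal(scale=0.1, size=(n_features, n_items))
W_in_init = W_in.copy()             # kept to show masked weights never move

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def forward(r):
    # masked encoding/decoding: only KG-connected weights contribute
    h = sigmoid(r @ (W_in * M))
    return h, sigmoid(h @ (W_out * M.T))

r = rng.random(n_items)             # one user's rating vector (toy data)
lr = 0.5
for _ in range(200):                # plain SGD on squared reconstruction error
    h, r_hat = forward(r)
    delta_out = (r_hat - r) * r_hat * (1 - r_hat)
    grad_out = np.outer(h, delta_out)
    dh = delta_out @ (W_out * M.T).T * h * (1 - h)
    grad_in = np.outer(r, dh)
    # the mask also gates the gradients, mimicking the inhibited backprop step
    W_out -= lr * grad_out * M.T
    W_in -= lr * grad_in * M
```

Because the mask multiplies both the forward pass and the gradients, weights at positions where M is zero stay at their initialization and never influence the reconstruction, which is what makes the hidden layer interpretable as KG features.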
“…The strength of SemAuto is its explainability, since the model, as previously said, is interpretable. To validate the explanations we are able to provide to users, we set up an online experiment leveraging an ad-hoc A/B testing platform we built [8]; thanks to 892 volunteers, we evaluated the effectiveness of our approach and compared its results against two baselines. Hence, we primarily focus on the following research questions:…”
Section: Explanation
confidence: 99%
“…producing transparent recommendations and explanations. An example of their explanation style is shown in Figure 2.9. The study of [8] focuses on the issue of explaining the output of a black-box recommender system. In this work, the recommender system is built using an autoencoder neural network technique that is also aware of knowledge graphs retrieved from the Semantic Web.…”
confidence: 99%
“…8 shows three graphs of the MEP, MER, and xF-score performance of all models when these metrics use the neighborhood explainability graph in the book domain. PMF was the winner, followed by AMF, while the performance of all other models was low.…”
confidence: 99%
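The xF-score mentioned in this citation statement is, in the explainable-recommendation literature, commonly computed as the harmonic mean of mean explainability precision (MEP) and mean explainability recall (MER). The sketch below follows that common per-user definition with hypothetical item ids; the exact formulas used in the cited study may differ.

```python
def mep(recommended, explainable):
    """Fraction of recommended items that are explainable (per user)."""
    rec = set(recommended)
    return len(rec & set(explainable)) / len(rec) if rec else 0.0

def mer(recommended, explainable):
    """Fraction of explainable items that were recommended (per user)."""
    exp = set(explainable)
    return len(set(recommended) & exp) / len(exp) if exp else 0.0

def xf_score(p, r):
    """Harmonic mean of explainability precision and recall."""
    return 2 * p * r / (p + r) if (p + r) else 0.0

# toy example with hypothetical item ids
p = mep([1, 2, 3, 4], [2, 3, 7])    # items 2 and 3 of 4 recommended -> 0.5
r = mer([1, 2, 3, 4], [2, 3, 7])    # 2 of 3 explainable items recommended
```

Averaging these per-user values over all users yields the MEP and MER figures that the citation statement compares across models.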