2020
DOI: 10.1561/1500000066

Explainable Recommendation: A Survey and New Perspectives

Abstract: Explainable recommendation attempts to develop models that generate not only high-quality recommendations but also intuitive explanations. The explanations may either be post-hoc or directly come from an explainable model (also called interpretable or transparent model in some contexts). Explainable recommendation tries to address the problem of why: by providing explanations to users or system designers, it helps humans to understand why certain items are recommended by the algorithm, where the human can eith…


Cited by 529 publications (281 citation statements)
References 159 publications (253 reference statements)
“…Explainable Recommendation. Explainable recommendation has also attracted a lot of attention in recent years [43]. Early approaches attempt to use topic models to generate intuitive explanations for recommendation results, e.g., [20,23,34].…”
Section: Related Work
confidence: 99%
“…In the field of RS, there have been various studies (Tintarev & Masthoff, 2007; Zhang & Chen, 2018) showing that equipping recommendations with personalized explanations helps to improve the system's persuasiveness and the user's trust in it. In recent years, collaborative filtering methods, in particular those utilizing deep learning models (Hartford et al., 2018; Berg et al., 2017; Zheng et al., 2016), have improved recommendation performance; yet the black-box nature of deep learning models and the latent space used in collaborative filtering make these models hard to explain.…”
Section: Related Work
confidence: 99%
“…• Lack of explainability: In addition to the score it gives to products, Yuka also recommends an alternative product in cases where a given product has fared below an overall score of 50. However, despite the growing interest in eXplainable AI and its impact on enhancing user trust and acceptability [5], as well as explainable and fair recommender systems [28,49], Yuka does not explain why a particular product is chosen as an alternative. Conspicuously, the recommended alternatives are not ranked by their score but by other factors such as 'product availability', which is not a real-time assessment based on, for instance, location or actual supermarket stock, but another constant assigned to the product.…”
Section: Case Study: Evaluating Yuka
confidence: 99%