2012
DOI: 10.1007/s11257-011-9117-5

Evaluating the effectiveness of explanations for recommender systems

Abstract: When recommender systems present items, these can be accompanied by explanatory information. Such explanations can serve seven aims: effectiveness, satisfaction, transparency, scrutability, trust, persuasiveness, and efficiency. These aims can be incompatible, so any evaluation needs to state which aim is being investigated and use appropriate metrics. This paper focuses particularly on effectiveness (helping users to make good decisions) and its trade-off with satisfaction. It provides an overview of existing…

Cited by 264 publications (156 citation statements)
References 40 publications
“…Increased transparency has been associated with many benefits, including increased user acceptance of (or satisfaction with) recommendations and predictions [4,8,39], improved training of intelligent agents [38], and increased trust in the system's predictions [10]. In this paper, however, we primarily care about transparency because it has been shown to help users understand how a learning system operates [23,28].…”
Section: Related Work
confidence: 99%
“…The reasoning and insight into the recommendation process exposed by an explanation interface can also increase the inspectability of the system as a whole. Tintarev and Masthoff [46] show that explanations make it easier to judge the quality of recommendations. Consequently, such explanations increase users' trust in the recommendations and, in turn, the perceived competence of the system ( [8,11], see also [48]).…”
Section: Inspectability
confidence: 99%
“…Users seem to appreciate it when recommender systems explain their recommendations [8,11,19,45,46,48]. In social recommenders, where users know the people on which the recommendations are based, the system can provide such explanation by showing how the overlap between one's preferences and those of one's friends resulted in a set of recommendations.…”
Section: Introduction
confidence: 99%
“…Höök et al. (1996) discussed ways to form explanations in adaptive systems. Tintarev and Masthoff (2012) provide a more detailed review of research on explanations in recommender systems. We share their conclusion that additional work is needed in the field, particularly to explore the extent to which the recommendations are actually improving user decision-making, to explore the costs and benefits of scrutability in improving recommendations, and to understand the effects of explanations in the context of live, deployed systems.…”
Section: Explanations and Transparency
confidence: 99%