Abstract. Explaining recommendations helps users to make better, more satisfying decisions. We describe a novel approach to explanation for recommender systems, one that drives the recommendation process, while at the same time providing the user with useful insights into the reason why items have been chosen and the trade-offs they may need to consider when making their choice. We describe this approach in the context of a case-based recommender system that harnesses opinions mined from user-generated reviews…
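The kind of review-based opinion mining described above can be illustrated with a minimal sketch. The toy lexicon, the adjacency heuristic, and all names below are assumptions for illustration only, not the paper's actual mining pipeline:

```python
from collections import defaultdict

# Tiny opinion lexicon (an assumption for this sketch).
LEXICON = {"great": 1, "excellent": 1, "poor": -1, "noisy": -1}

def mine_aspect_opinions(reviews: list[str], aspects: list[str]) -> dict[str, int]:
    """Accumulate a signed opinion score per aspect across reviews,
    using the crude heuristic that the word immediately preceding an
    aspect mention carries the opinion about it."""
    scores: dict[str, int] = defaultdict(int)
    for review in reviews:
        words = review.lower().split()
        for i, w in enumerate(words):
            if w in aspects and i > 0 and words[i - 1] in LEXICON:
                scores[w] += LEXICON[words[i - 1]]
    return dict(scores)

reviews = ["Great battery but noisy fan", "Excellent battery life"]
print(mine_aspect_opinions(reviews, ["battery", "fan"]))
# prints {'battery': 2, 'fan': -1}
```

Scores like these could then rank items and surface trade-offs ("strong battery, but a noisy fan") as explanation content.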
“…As discussed, some explanations provide information associated with background knowledge, but such knowledge is almost always associated with the decision inference process. Four approaches [254,46,156,44] in the e-commerce domain exploited external sources of information, namely product reviews.…”
Section: External Sources Of Explanation Content
With the recent advances in the field of artificial intelligence, an increasing number of decision-making tasks are delegated to software systems. A key requirement for the success and adoption of such systems is that users must trust system choices or even fully automated decisions. To achieve this, explanation facilities have been widely investigated as a means of establishing trust in these systems since the early years of expert systems. With today's increasingly sophisticated machine learning algorithms, new challenges in the context of explanations, accountability, and trust towards such systems constantly arise. In this work, we systematically review the literature on explanations in advice-giving systems. This is a family of systems that includes recommender systems, one of the most successful classes of advice-giving software in practice. We investigate the purposes of explanations as well as how they are generated, presented to users, and evaluated. As a result, we derive a novel comprehensive taxonomy of aspects to be considered when designing explanation facilities for current and future decision support systems. The taxonomy covers a variety of facets, such as explanation objective, responsiveness, content, and presentation. Moreover, we identify several challenges that remain unaddressed so far, for example fine-grained issues associated with the presentation of explanations and with how explanation facilities are evaluated.
“…There are a number of different approaches and studies that deal with recommendations and introduce new methods in the area of personalization models or explanations [16,24]. Explanations in particular, however, remain a comparatively new area of active research.…”
Nowadays, personalized recommendations are widely used and popular. Many systems in various fields use recommendations for different purposes. One of the basic problems is users' distrust of recommender systems: users often consider recommendations an intrusion into their privacy. It is therefore important to make recommendations transparent and understandable to users. To address these problems, we propose a novel hybrid method of personalized explanation of recommendations. Our method is independent of the recommendation technique and combines basic explanation styles to provide the appropriate type of personalized explanation to each user. We conducted several online experiments in the news domain. The results clearly show that the proposed personalized hybrid explanation approach improves users' attitude towards the recommender; moreover, we observed an increase in recommendation precision.
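As a rough illustration of a recommendation-technique-independent hybrid of basic explanation styles, the sketch below selects, per user, the style that user has responded to best. The style names, templates, and selection rule are assumptions for illustration, not the authors' actual method:

```python
from typing import Callable

# Three basic explanation styles (templates are invented for this sketch).
def collaborative_style(item: str) -> str:
    return f"Users similar to you also read '{item}'."

def content_style(item: str) -> str:
    return f"'{item}' covers topics you read about frequently."

def popularity_style(item: str) -> str:
    return f"'{item}' is trending among readers today."

STYLES: dict[str, Callable[[str], str]] = {
    "collaborative": collaborative_style,
    "content": content_style,
    "popularity": popularity_style,
}

def explain(item: str, style_scores: dict[str, float]) -> str:
    """Pick the explanation style with the highest stored preference score
    for this user; the recommender that chose the item is not consulted."""
    best_style = max(style_scores, key=style_scores.get)
    return STYLES[best_style](item)

profile = {"collaborative": 0.2, "content": 0.7, "popularity": 0.1}
print(explain("Election results", profile))
# prints 'Election results' covers topics you read about frequently.
```

Because the explanation layer only consumes a recommended item and a per-user style profile, it can sit on top of any underlying recommendation technique.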
“…Another focus is identifying the sets of aspects with higher positive/negative polarity to give insights into the reason why items have been chosen [8]. Those approaches first need to group the aspects to reduce the granularity and provide useful recommendations, a step often solved by clustering aspects using background knowledge to simplify the process.…”
Abstract. In this paper we focus on a particularly interesting type of web user-generated content: people's experiences. We extend our previous work on aspect extraction and sentiment analysis and propose a novel approach to create a vocabulary of basic-level concepts with the appropriate granularity to characterize a set of products. This concept vocabulary is created by analyzing the usage of the aspects over a set of reviews, and allows us to find those features with a clear positive or negative polarity to create the bundles of arguments. The argument bundles allow us to define a concept-wise satisfaction degree of a user query over a set of bundles using the notion of fuzzy implication, allowing the experiences of other people to be reused to meet the needs of a specific user.
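A concept-wise satisfaction degree via fuzzy implication can be sketched as follows. The choice of the Łukasiewicz implication, the min aggregation, and all names are assumptions for illustration; the abstract does not pin down the exact operator:

```python
def lukasiewicz_implication(a: float, b: float) -> float:
    """Łukasiewicz fuzzy implication I(a, b) = min(1, 1 - a + b)."""
    return min(1.0, 1.0 - a + b)

def satisfaction_degree(query: dict[str, float], bundle: dict[str, float]) -> float:
    """Concept-wise satisfaction of a user query against an argument bundle.

    query:  aspect -> importance to the user, in [0, 1]
    bundle: aspect -> positive-polarity degree mined from reviews, in [0, 1]
    Aspects absent from the bundle count as polarity 0; the overall degree
    is the minimum implication value over the queried aspects.
    """
    return min(
        lukasiewicz_implication(weight, bundle.get(aspect, 0.0))
        for aspect, weight in query.items()
    )

query = {"battery": 0.9, "screen": 0.6}
bundle = {"battery": 0.8, "screen": 0.7, "price": 0.4}
print(round(satisfaction_degree(query, bundle), 2))
# prints 0.9
```

Intuitively, an aspect the user cares about strongly but that reviews rate poorly drags the whole degree down, which is what lets other people's experiences answer one user's query.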