Recommender Systems have been widely used to help users find what they are looking for, thus tackling the information overload problem. After several years of research and industrial findings in pursuit of better algorithms to improve accuracy and diversity metrics, explanation services for recommendation are gaining momentum as a tool to provide human-understandable feedback on results computed, in most cases, by black-box machine learning techniques. As a matter of fact, explanations may foster user satisfaction, trust, and loyalty in a system. In this paper, we evaluate how the different kinds of information encoded in a Knowledge Graph are perceived by users when adopted to show them an explanation. More precisely, we compare how the use of categorical information, factual information, or a mixture of the two in building explanations affects explanatory criteria for a recommender system. Experimental results are validated through an A/B testing platform that uses a recommendation engine based on a Semantics-Aware Autoencoder to build user profiles, which are in turn exploited to compute recommendation lists and to provide an explanation.
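As a rough illustration of the categorical-versus-factual distinction described above, the sketch below renders an explanation sentence from Knowledge Graph properties of a recommended item. The property names, item titles, and the `explain` helper are all hypothetical examples, not the paper's actual templates: categorical properties correspond to category memberships (e.g. `dct:subject` in DBpedia), while factual ones are concrete relations such as `starring` or `director`.

```python
# Hypothetical KG properties for a recommended movie. The split between
# categorical (category memberships) and factual (concrete relations)
# properties mirrors the comparison described in the abstract.
CATEGORICAL = {"subject": ["1990s crime films", "Films about revenge"]}
FACTUAL = {"starring": ["Uma Thurman"], "director": ["Quentin Tarantino"]}

def explain(item, liked_item, props):
    """Render a one-sentence explanation from shared KG properties."""
    parts = [f"{p}: {', '.join(vs)}" for p, vs in props.items()]
    return (f"We recommend '{item}' because, like '{liked_item}' "
            f"which you liked, it shares {'; '.join(parts)}.")

# A purely categorical, a purely factual, or a mixed explanation can be
# produced by passing the corresponding property dictionary.
print(explain("Kill Bill", "Pulp Fiction", {**CATEGORICAL, **FACTUAL}))
```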
The growth of domain-specific applications of semantic models, boosted by the recent achievements of unsupervised embedding learning algorithms, demands domain-specific evaluation datasets. In many cases, content-based recommenders being a prime example, these models are required to rank words or texts according to their semantic relatedness to a given concept, with particular focus on top ranks. In this work, we give a threefold contribution to address these requirements: (i) we define a protocol for the construction, based on adaptive pairwise comparisons, of a relatedness-based evaluation dataset tailored to the available resources and optimized to be particularly accurate in top-rank evaluation; (ii) we define appropriate metrics, extensions of well-known ranking correlation coefficients, to evaluate a semantic model via the aforementioned dataset by taking into account the greater significance of top ranks; finally, (iii) we define a stochastic transitivity model to simulate semantic-driven pairwise comparisons, which confirms the effectiveness of the proposed dataset construction protocol.
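To make the idea of contribution (ii) concrete, here is a minimal sketch of a top-weighted extension of a Kendall-style rank correlation: disagreements between two rankings are weighted by the best position either item occupies, so errors among the top ranks are penalized more. This is a generic illustration with hyperbolic position weights, not the specific coefficients defined in the paper.

```python
def top_weighted_kendall(rank_a, rank_b):
    """Kendall-style correlation in [-1, 1] where pair disagreements
    among top-ranked items count more (hyperbolic position weights)."""
    items = list(rank_a)
    pos_a = {x: i for i, x in enumerate(rank_a)}
    pos_b = {x: i for i, x in enumerate(rank_b)}
    num = den = 0.0
    for i in range(len(items)):
        for j in range(i + 1, len(items)):
            x, y = items[i], items[j]
            # Weight a pair by the best (smallest) rank it touches
            # in either ranking, so top-rank pairs dominate the score.
            w = 1.0 / (1 + min(pos_a[x], pos_a[y], pos_b[x], pos_b[y]))
            concordant = (pos_a[x] - pos_a[y]) * (pos_b[x] - pos_b[y]) > 0
            num += w if concordant else -w
            den += w
    return num / den if den else 0.0
```

Identical rankings score 1.0 and reversed rankings score -1.0, as with plain Kendall tau, but swapping the top two items lowers the score more than swapping the bottom two.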
Recommender Systems are widely adopted in nowadays services such as e-commerce websites, multimedia streaming platforms, and many others. They help users to find what they are looking for by suggesting relevant items leveraging their past preferences. Deep Learning models are very effective in solving the recommendation problem; as a matter of fact, many deep learning architectures have been proposed over the years. Even if deep learning models outperform many state-of-the-art algorithms, the worst disadvantage is about their interpretability: explaining the reason a specific item has been recommended to a user is quite a difficult task since the model is not interpretable. Accuracy in the recommendation is no more enough since users are also expecting a useful explanation for the suggested items. Users, on the other hand, want to know why. In this paper, we present SemAuto, a novel approach based on an Autoencoder Neural Network that makes it possible to semantically label neurons in hidden layers, thus paving the way to the model's interpretability and consequently to the explanation of a recommendation. We tested our semanticsaware approach with respect to other state-of-the-art algorithms to prove the recommendation's accuracy. Furthermore, we performed an extensive A/B test with real users to evaluate the explanation we generate.
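The core idea of semantically labeling hidden neurons can be sketched as an autoencoder whose connectivity follows the Knowledge Graph: each hidden unit stands for one KG feature (e.g. a genre), and an item connects to a hidden unit only if that feature annotates the item. After training on a user's ratings, the hidden activations read off as labeled feature scores for that user's profile. The tiny NumPy implementation below is an assumption-laden sketch of this masking scheme (function names, the sigmoid/SGD details, and the toy data are ours, not the paper's):

```python
import numpy as np

def build_mask(items, item_features, features):
    """Binary connectivity mask: item i connects to hidden unit f
    only if feature f annotates item i in the knowledge graph."""
    mask = np.zeros((len(items), len(features)))
    for i, item in enumerate(items):
        for f in item_features[item]:
            mask[i, features.index(f)] = 1.0
    return mask

def semauto_feature_scores(ratings, mask, features, epochs=200, lr=0.5, seed=0):
    """Train a one-hidden-layer autoencoder whose connections follow the
    KG mask, then return the hidden activations as labeled feature scores."""
    rng = np.random.default_rng(seed)
    n_items, n_feats = mask.shape
    W1 = rng.normal(0, 0.1, (n_items, n_feats)) * mask    # masked encoder
    W2 = rng.normal(0, 0.1, (n_feats, n_items)) * mask.T  # masked decoder
    sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
    x = ratings
    for _ in range(epochs):
        h = sigmoid(x @ W1)   # hidden units = KG features
        y = sigmoid(h @ W2)   # reconstructed ratings
        # Backpropagate squared reconstruction error, keeping the
        # masked-out weights at zero so the KG structure is preserved.
        d_y = (y - x) * y * (1 - y)
        d_h = (d_y @ W2.T) * h * (1 - h)
        W2 -= lr * np.outer(h, d_y) * mask.T
        W1 -= lr * np.outer(x, d_h) * mask
    return dict(zip(features, sigmoid(x @ W1)))
```

For example, with three movies annotated with the genres "action" and "drama", the returned dictionary maps each genre name to an activation score, which is what makes the hidden layer human-readable.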