“…The second dataset we used is the core-10 Yahoo Movies dataset (https://webscope.sandbox.yahoo.com/catalog.php?datatype=r), which contains 173,676 ratings on 2,131 movies provided by 7,012 users. Similarly, in this dataset each movie is associated with at least one genre, with a total of 24 genres in the entire dataset.…”
Recently there has been growing interest in fairness-aware recommender systems, including fairness understood as consistent performance across different users or groups of users. A recommender system could be considered unfair if its recommendations do not fairly represent the tastes of a certain group of users while other groups receive recommendations consistent with their preferences. In this paper, we use a metric called miscalibration to measure how responsive a recommendation algorithm is to users' true preferences, and we consider how various algorithms may produce different degrees of miscalibration for different users. In particular, we conjecture that popularity bias, a well-known phenomenon in recommendation, is one important factor leading to miscalibration. Our experimental results on two real-world datasets show that there is a connection between how different user groups are affected by algorithmic popularity bias and their level of interest in popular items. Moreover, we show that the more a group is affected by algorithmic popularity bias, the more its recommendations are miscalibrated.
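One common way to quantify miscalibration is the KL divergence between the genre distribution of a user's profile and that of their recommendations. The sketch below follows that idea; the smoothing constant, the toy item catalog, and the helper names are illustrative assumptions, not the exact formulation used in the paper.

```python
from collections import Counter
from math import log

def genre_distribution(items, item_genres):
    """Normalized genre counts over a list of item IDs."""
    counts = Counter(g for i in items for g in item_genres[i])
    total = sum(counts.values())
    return {g: c / total for g, c in counts.items()}

def miscalibration(profile, recs, item_genres, alpha=0.01):
    """KL divergence between the genre distribution of the user's
    profile (p) and of the recommendations (q); q is smoothed toward
    p so the divergence stays finite when a genre is missing from q."""
    p = genre_distribution(profile, item_genres)
    q = genre_distribution(recs, item_genres)
    kl = 0.0
    for g, pg in p.items():
        qg = (1 - alpha) * q.get(g, 0.0) + alpha * pg
        kl += pg * log(pg / qg)
    return kl

# Hypothetical toy data: recommendations over-represent "action".
item_genres = {1: ["action"], 2: ["action"], 3: ["drama"],
               4: ["action"], 5: ["action"]}
profile = [1, 2, 3]   # user watched 2 action, 1 drama
recs = [4, 5]         # recommendations are all action
print(miscalibration(profile, recs, item_genres) > 0)  # True: miscalibrated
```

A perfectly calibrated list (one whose genre proportions match the profile) drives the divergence to zero, so higher values indicate stronger miscalibration.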
“…The precision function can be formulated as follows. (8) Recall describes the probability that a relevant item is selected into the recommendation list. The recall function can be formulated as follows.…”
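The snippet's equations are elided, but precision and recall for top-N recommendation have standard definitions: precision is the fraction of the recommended list that is relevant, and recall is the fraction of the relevant items that made it into the list. A minimal sketch, with hypothetical item IDs:

```python
def precision_at_n(recommended, relevant):
    """Fraction of the recommended list that is relevant."""
    hits = len(set(recommended) & set(relevant))
    return hits / len(recommended) if recommended else 0.0

def recall_at_n(recommended, relevant):
    """Fraction of the relevant items that appear in the list."""
    hits = len(set(recommended) & set(relevant))
    return hits / len(relevant) if relevant else 0.0

recommended = [10, 20, 30, 40, 50]   # top-5 list
relevant = [20, 50, 60]              # held-out items the user liked
print(precision_at_n(recommended, relevant))  # 0.4  (2 hits / 5 recommended)
print(recall_at_n(recommended, relevant))     # 0.666...  (2 hits / 3 relevant)
```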
Section: Many-objective Optimization (mentioning)
confidence: 99%
“…In addition, CTR and GMV are two important objectives that are not entirely consistent. A CTR-optimal or GMV-optimal recommendation can be rather suboptimal or even poor in terms of the other objective [8].…”
Section: Introduction (mentioning)
confidence: 99%
“…Taking two chromosomes as an example, chromosome 1 is [1, 5, 7, 9, 6, 12, 2, 1, 5, 4, 3, 7, 8, 1, 2], and chromosome 2 is [5, 7, 4, 1, 2, 10, 6, 1, 3, 7, 4, 3, 2, 8, 9]. Assume this is a top-5 recommendation for three users, and the number of cut points is 2.…”
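The chromosomes above encode three users' top-5 lists flattened into a single vector, and two cut points split each parent into three segments that can be exchanged. A minimal two-point crossover sketch using the excerpt's chromosomes; the cut positions (5 and 10, i.e., the user boundaries) are an assumption, since the excerpt does not say where the cut points fall:

```python
def two_point_crossover(parent1, parent2, cut1, cut2):
    """Swap the middle segment [cut1:cut2] between the two parents."""
    child1 = parent1[:cut1] + parent2[cut1:cut2] + parent1[cut2:]
    child2 = parent2[:cut1] + parent1[cut1:cut2] + parent2[cut2:]
    return child1, child2

# Chromosomes from the excerpt: 3 users x top-5 items, flattened.
c1 = [1, 5, 7, 9, 6, 12, 2, 1, 5, 4, 3, 7, 8, 1, 2]
c2 = [5, 7, 4, 1, 2, 10, 6, 1, 3, 7, 4, 3, 2, 8, 9]

# Assumed cut points at the user boundaries (positions 5 and 10).
o1, o2 = two_point_crossover(c1, c2, 5, 10)
print(o1)  # [1, 5, 7, 9, 6, 10, 6, 1, 3, 7, 3, 7, 8, 1, 2]
print(o2)  # [5, 7, 4, 1, 2, 12, 2, 1, 5, 4, 4, 3, 2, 8, 9]
```

Cutting at user boundaries keeps each user's top-5 block intact, which is in the spirit of the block-based recombination the paper describes, though the paper's NBHX operator additionally chooses blocks heuristically.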
Most traditional recommender systems focus on increasing consumer satisfaction by providing a list of relevant content to consumers. However, the perspectives of the other stakeholders in a multisided marketplace are equally important: exposure for suppliers or providers, and profit for the platform. Suppliers want their products presented to users, and the platform's objective is to maximize its profit. Because consumers' preferences may conflict with the objectives of providers and the platform, considering only the users' view degrades the utility of recommendation methods. Therefore, in this work, we use a many-objective optimization method to maintain a tradeoff among five objectives for three stakeholders and obtain multiple Pareto-front solutions in a single run. We first combine customer lifetime value and user purchase preference to create a new similarity model (Sim_RFMP) that increases the accuracy of the recommendation list. Furthermore, we propose a many-objective model (NBHXMAOEA) for multistakeholder recommendation. In NBHXMAOEA, we present a novel N-block heuristic crossover operator (NBHX) that recombines blocks of chromosomes based on heuristics. Extensive experiments demonstrate that our proposed NBHXMAOEA achieves superior average accuracy, diversity, novelty, provider coverage, and platform profit compared with competing methods. INDEX TERMS: Many-objective, recommender systems, similarity model, stakeholders.
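The abstract's notion of Pareto-front solutions over five objectives can be made concrete with a dominance check: a solution is on the front if no other solution is at least as good on every objective and strictly better on one. A minimal sketch, with hypothetical objective vectors (accuracy, diversity, novelty, provider coverage, platform profit), all maximized:

```python
def dominates(a, b):
    """a Pareto-dominates b: >= on every objective, > on at least one
    (maximization assumed for all objectives)."""
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

def pareto_front(solutions):
    """Return the nondominated subset of a list of objective vectors."""
    return [s for s in solutions
            if not any(dominates(t, s) for t in solutions if t is not s)]

# Hypothetical objective vectors for three candidate recommendation slates.
sols = [(0.9, 0.2, 0.3, 0.5, 0.4),
        (0.7, 0.6, 0.5, 0.6, 0.5),
        (0.6, 0.5, 0.4, 0.5, 0.4)]  # dominated by the second vector
print(pareto_front(sols))  # first two vectors survive
```

A many-objective evolutionary algorithm like the one the paper proposes maintains and refines such a nondominated set across generations, rather than collapsing the five objectives into a single score.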
“…The authors in (5) note that Google Home and Alexa did not even exist three years ago, and predict that 33 million voice-first devices will be in circulation by the end of 2017.…”
scite is a Brooklyn-based organization that helps researchers better discover and understand research articles through Smart Citations: citations that display the context of the citation and describe whether the article provides supporting or contrasting evidence. scite is used by students and researchers from around the world and is funded in part by the National Science Foundation and the National Institute on Drug Abuse of the National Institutes of Health.