Recommender systems use data on past user preferences to predict possible future likes and interests. A key challenge is that while the most useful individual recommendations are to be found among diverse niche objects, the most reliably accurate results are obtained by methods that recommend objects based on user or object similarity. In this paper we introduce a new algorithm specifically to address the challenge of diversity and show how it can be used to resolve this apparent dilemma when combined in an elegant hybrid with an accuracy-focused algorithm. By tuning the hybrid appropriately we are able to obtain, without relying on any semantic or context-specific information, simultaneous gains in both accuracy and diversity of recommendations.

Getting what you want, as the saying goes, is easy; the hard part is working out what it is that you want in the first place (1). Whereas information filtering tools like search engines typically require the user to specify in advance what they are looking for (2-5), this challenge of identifying user needs is the domain of recommender systems (5-8), which attempt to anticipate future likes and interests by mining data on past user activities.

Many diverse recommendation techniques have been developed, including collaborative filtering (6, 9), content-based analysis (10), spectral analysis (11, 12), latent semantic models and Dirichlet allocation (13, 14), and iterative self-consistent refinement (15-17). What most have in common is that they are based on similarity, either of users or objects or both: for example, e-commerce sites such as Amazon.com use the overlap between customers' past purchases and browsing activity to recommend products (18, 19), while the TiVo digital video system recommends TV shows and movies on the basis of correlations in users' viewing patterns and ratings (20). The risk of such an approach is that, with recommendations based on overlap rather than difference, more and more users will be exposed to a narrowing band of popular objects, while niche items that might be very relevant will be overlooked.

The focus on similarity is compounded by the metrics used to assess recommendation performance. A typical method of comparison is to consider an algorithm's accuracy in reproducing known user opinions that have been removed from a test dataset. An accurate recommendation, however, is not necessarily a useful one: real value is found in the ability to suggest objects users would not readily discover for themselves, that is, in the novelty and diversity of recommendation (21). Despite this, most studies of recommender systems focus overwhelmingly on accuracy as the only important factor [for example, the Netflix Prize (22) challenged researchers to increase accuracy without any reference to novelty or personalization of results]. Where diversification is addressed, it is typically as an adjunct to the main recommendation process, based on restrictive features such as semantic or other context-specific information (23, 24). The clear concern is that an algorithm...
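The text above does not spell out how the hybrid is constructed, but one well-known way to build a tunable accuracy/diversity hybrid on a user-object bipartite graph is to interpolate between a mass-diffusion recommender (accuracy-leaning) and a heat-conduction recommender (diversity-leaning). The following is a minimal sketch of that idea, not the paper's published code; the function name and the single interpolation parameter `lam` are illustrative assumptions.

```python
import numpy as np

def hybrid_scores(A, lam=0.5):
    """Recommendation scores from a tunable diffusion/heat-conduction hybrid.

    A   : users x objects 0/1 matrix of past collections (every row and
          column is assumed to have at least one nonzero entry)
    lam : 1.0 -> pure mass diffusion (accuracy-leaning),
          0.0 -> pure heat conduction (diversity-leaning)
    """
    k_user = A.sum(axis=1)                     # user degrees
    k_obj = A.sum(axis=0)                      # object degrees
    # overlap[a, b] = sum_i A[i, a] * A[i, b] / k_user[i]
    overlap = (A / k_user[:, None]).T @ A
    # hybrid object-to-object transition matrix: dividing by k_obj[a]**(1-lam)
    # recovers heat conduction at lam=0 and mass diffusion at lam=1
    W = overlap / (k_obj[:, None] ** (1 - lam) * k_obj[None, :] ** lam)
    scores = A @ W.T                           # score of each object for each user
    return np.where(A == 1, -np.inf, scores)   # never re-recommend collected items
```

Ranking each user's row of the returned matrix gives that user's recommendation list; sweeping `lam` between 0 and 1 trades diversity against accuracy, which is the kind of tuning the abstract describes.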
In this paper we review several novel approaches to research evaluation. We start with a brief overview of peer review, its controversies, and metrics for assessing the efficiency and overall quality of peer review. We then discuss five approaches, including reputation-based ones, that emerged from research carried out by the LiquidPub project and research groups that collaborated with it. These approaches are alternative or complementary to traditional peer review. We discuss the pros and cons of the proposed approaches and conclude with a vision for the future of research evaluation, arguing that no single system can suit all stakeholders in the various communities.
When users rate objects, a sophisticated algorithm that takes into account ability or reputation may produce a fairer or more accurate aggregation of ratings than the straightforward arithmetic average. Recently a number of authors have proposed different co-determination algorithms where estimates of user and object reputation are refined iteratively together, permitting accurate measures of both to be derived directly from the rating data. However, simulations demonstrating these methods' efficacy assumed a continuum of rating values, consistent with typical physical modelling practice, whereas in most actual rating systems only a limited range of discrete values (such as a 5-star system) is employed. We perform a comparative test of several co-determination algorithms with different scales of discrete ratings and show that this seemingly minor modification in fact has a significant impact on algorithms' performance. Paradoxically, where rating resolution is low, increased noise in users' ratings may even improve the overall performance of the system.
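As a concrete illustration of the co-determination idea, here is a minimal sketch of one common scheme: each object's quality is estimated as a reputation-weighted average of its ratings, and each user's reputation is the inverse of their mean squared deviation from those estimates, iterated to a fixed point. The exact reputation formula varies between the proposed algorithms; the damping constant `eps` and all names here are illustrative.

```python
import numpy as np

def codetermine(R, mask, eps=1e-8, tol=1e-9, max_iter=500):
    """Iteratively co-determine object qualities and user reputations.

    R    : users x objects matrix of ratings (any scale, discrete or continuous)
    mask : same shape, 1 where a rating exists, 0 otherwise
    Returns (estimated quality per object, reputation weight per user).
    """
    n_users, n_objects = R.shape
    w = np.ones(n_users)                 # start with equal reputations
    q = np.zeros(n_objects)
    for _ in range(max_iter):
        # quality = reputation-weighted average of the ratings each object received
        q_new = ((w[:, None] * R * mask).sum(axis=0)
                 / ((w[:, None] * mask).sum(axis=0) + eps))
        # reputation = inverse mean squared deviation from the current qualities
        sq_err = ((R - q_new[None, :]) ** 2 * mask).sum(axis=1) \
                 / (mask.sum(axis=1) + eps)
        w = 1.0 / (sq_err + eps)
        if np.max(np.abs(q_new - q)) < tol:
            break
        q = q_new
    return q, w
```

Running such a scheme on ratings restricted to a coarse discrete scale (e.g. five stars) is exactly the setting where the comparative tests described above find the performance differences.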
We investigate the behavioral patterns of a population of agents, each controlled by a simple biologically motivated neural network model, when they are set in competition against each other in the Minority Game of Challet and Zhang. We explore the effects of changing agent characteristics, demonstrating that crowding behavior takes place among agents of similar memory, and show how this allows unique 'rogue' agents with higher memory values to take advantage of a majority population. We also show that agents' analytic capability is largely determined by the size of the intermediary layer of neurons.

In the context of these results, we discuss the general nature of natural and artificial intelligence systems, and suggest that intelligence exists only in the context of the surrounding environment (embodiment).

Source code for the programs used can be found at http://neuro.webdrake.net/.
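For readers unfamiliar with the game itself, the following is a minimal sketch of the standard Minority Game loop, with conventional lookup-table agents standing in for the neural network agents studied in the paper; all parameter values are illustrative. Giving different subpopulations different memory lengths `M` is the kind of manipulation behind the crowding and 'rogue agent' results described above.

```python
import numpy as np

rng = np.random.default_rng(0)
N, M, S, T = 101, 3, 2, 1000     # agents (odd), memory, strategies each, rounds

# each strategy maps each of the 2**M possible histories to an action in {-1, +1}
strategies = rng.choice([-1, 1], size=(N, S, 2 ** M))
scores = np.zeros((N, S))                  # virtual points of each strategy
history = rng.integers(0, 2 ** M)          # current history as an M-bit integer

for _ in range(T):
    best = scores.argmax(axis=1)                       # each agent plays its best strategy
    actions = strategies[np.arange(N), best, history]
    minority = -np.sign(actions.sum())                 # the minority side wins (N is odd)
    scores += (strategies[:, :, history] == minority)  # reward strategies that chose it
    # append the winning side to the history bit-string, keeping only M bits
    history = ((history << 1) | (minority > 0)) % (2 ** M)
```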
The unprecedented access offered by the World Wide Web brings with it the potential to gather huge amounts of data on human activities. Here we exploit this by using a toy model of financial markets, the Minority Game (MG), to investigate human speculative trading behaviour and information capacity. Hundreds of individuals have played a total of tens of thousands of game turns against computer-controlled agents in the Web-based Interactive Minority Game. The analytical understanding of the MG permits fine-tuning of the market situations encountered, allowing for investigation of human behaviour in a variety of controlled environments. In particular, our results indicate a transition in players' decision-making, as the markets become more difficult, between deductive behaviour making use of short-term trends in the market, and highly repetitive behaviour that ignores the market history entirely, yet outperforms random decision-making.

PACS: 02.50.Le; 89.65.Gh; 89.70.+c. Keywords: Decision theory and game theory; Economics and financial markets; Information theory; Internet experiments.

Experimental games and their theoretical offspring have been a fruitful research direction for various disciplines, particularly psychology [1-6] and economics [7-11], but elsewhere as well [12-14]. The advantage of this approach is that the simplified game environment allows for controlled investigation of human behaviour while still potentially maintaining the essential features of real-world situations. A notable example in recent years has been the so-called "market entry" games [9, 11], where traders must decide whether or not to join a market based on knowledge of its capacity and of their competitors' past actions. These games have generated much interest as examples of situations where the insight provided by classical economic theory is limited, and an experimental approach was thought essential [11, 15, 16].

By coincidence, a theoretical approach has been developed by the statistical physics community for an independently created game that has many similarities to the market entry class: the Minority Game (MG) [17]. Economic agents are endowed with simple strategies and learn inductively, as suggested by Arthur [18]. A rich market dynamics emerges, whose properties depend on only a few simple parameters [19-21]. These results have recently led some authors to return to more traditional experiments, playing the MG with small groups of humans [22, 23].

Our approach here has instead been to make use of the understanding of the theoretical game, by having individual humans play against computer-controlled "MG agents". We can thus fine-tune the market situation the player encounters, and provide a variety of controlled environments in which to investigate human behaviour. Because we only ever engage individual players, we have been able to make use of the immense access provided by the World Wide Web, presenting the game via an online interface. Since being launched a year ago [25], hundreds of players...
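As a hedged illustration of how the two behavioural regimes might be quantified from game logs, the sketch below computes two crude per-player indicators: how often the player repeats their previous action, and how often they follow the previous winning side. These definitions are assumptions made for illustration, not the paper's actual analysis.

```python
def behaviour_profile(actions, winners):
    """Crude indicators of the two regimes described above.

    actions : the player's choice on each turn (e.g. +1 / -1)
    winners : the winning (minority) side on each turn
    Requires at least two turns of play.
    Returns (repetition rate, trend-following rate).
    """
    n = len(actions) - 1
    # repetition: choosing the same side as on the previous turn
    repeat = sum(a == prev for a, prev in zip(actions[1:], actions)) / n
    # trend-following: choosing the side that won on the previous turn
    follow = sum(a == w for a, w in zip(actions[1:], winners)) / n
    return repeat, follow
```

A player near the repetitive extreme would show a repetition rate close to 1 regardless of the winners sequence, while a trend-follower's choices would track the previous winning side.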