In this work we present topic diversification, a novel method designed to balance and diversify personalized recommendation lists in order to reflect the user's complete spectrum of interests. Though detrimental to average accuracy, we show that our method improves user satisfaction with recommendation lists, in particular for lists generated by the common item-based collaborative filtering algorithm. Our work builds upon prior research on recommender systems, looking at properties of recommendation lists as entities in their own right rather than focusing on the accuracy of individual recommendations. We introduce the intra-list similarity metric to assess the topical diversity of recommendation lists and the topic diversification approach for decreasing intra-list similarity. We evaluate our method using book recommendation data, including an offline analysis of 361,349 ratings and an online study involving more than 2,100 subjects.
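To make the intra-list similarity idea concrete, here is a minimal Python sketch of one way such a metric can be computed. The abstract does not specify the underlying similarity function, so the cosine similarity over item feature vectors used below, the averaging over pairs, and all names are illustrative assumptions rather than the paper's exact definition; topic diversification would then rerank candidates to drive this score down.

```python
import numpy as np

def intra_list_similarity(feature_vectors):
    """Average pairwise cosine similarity of the items in one
    recommendation list; higher values indicate a less diverse list.
    `feature_vectors` is a 2-D NumPy array, one row per item."""
    # Normalize rows so that dot products become cosine similarities.
    norms = np.linalg.norm(feature_vectors, axis=1, keepdims=True)
    unit = feature_vectors / norms
    sims = unit @ unit.T
    n = len(feature_vectors)
    # Sum similarities over distinct pairs (strict upper triangle)
    # and average over the n*(n-1)/2 pairs.
    pair_sum = np.triu(sims, k=1).sum()
    return pair_sum / (n * (n - 1) / 2)

# Example: two near-duplicate items plus one dissimilar item.
items = np.array([[1.0, 0.0], [1.0, 0.1], [0.0, 1.0]])
print(intra_list_similarity(items))  # ~0.36; duplicates raise the score
```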
As users continue offloading more control and responsibility to the computer, coordinating the asynchronous interactions between the user and computer is becoming increasingly important. Without proper coordination, an application attempting to gain the user's attention risks interrupting the user in the midst of performing another task. To demonstrate why an application should avoid interrupting the user whenever possible, we designed an experiment measuring the disruptive effect of an interruption on a user's task performance. The experiment utilized six web-based task categories and two categories of interruption tasks. The results of the experiment demonstrate that (i) a user performs more slowly on an interrupted task than on a non-interrupted task, (ii) the disruptive effect of an interruption differs as a function of task category, and (iii) different interruption tasks cause similar disruptive effects on task performance. These results empirically validate the need to better coordinate user interactions among applications that are competing for the user's attention.
The aim of this paper is to advance rigorous Internet-based HIV/STD prevention quantitative research by providing guidance to fellow researchers, faculty supervising graduate students, human subjects' committees, and review groups on some of the most common and challenging questions about Internet-based HIV prevention quantitative research. The authors represent several research groups who have gained experience conducting some of the first Internet-based HIV/STD prevention quantitative surveys in the US and elsewhere. Sixteen questions specific to Internet-based HIV prevention survey research are identified. To aid rigorous development and review of applications, these questions are organized around six common criteria used by federal review groups in the US: significance; innovation; approach (broken down further into research design, formative development, procedures, sampling considerations, and data collection); investigator; environment; and human subjects' issues. Strategies for promoting minority participant recruitment, minimizing attrition, validating participants, and compensating participants are discussed. Throughout, the implications for budgets and realistic timetables are identified.
Collaborative filtering has proven to be valuable for recommending items in many different domains. In this paper, we explore the use of collaborative filtering to recommend research papers, using the citation web between papers to create the ratings matrix. Specifically, we tested the ability of collaborative filtering to recommend citations that would be suitable additional references for a target research paper. We investigated six algorithms for selecting citations, evaluating them through offline experiments against a database of over 186,000 research papers contained in ResearchIndex. We also performed an online experiment with over 120 users to gauge user opinion of the effectiveness of the algorithms and of the utility of such recommendations for common research tasks. We found large differences in the accuracy of the algorithms in the offline experiment, especially when balanced for coverage. In the online experiment, users felt they received quality recommendations, and were enthusiastic about the idea of receiving recommendations in this domain.
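To make the citation-web-as-ratings idea concrete, here is a minimal Python sketch that treats each citing paper as a "user" and each cited paper as an "item", builds an implicit ratings matrix from citation pairs, and ranks candidate citations by co-citation with a target paper's existing references. The scoring rule and all identifiers are illustrative assumptions; this is not one of the six algorithms the paper actually evaluates.

```python
from collections import defaultdict
from itertools import combinations

# The citation web as (citing_paper, cited_paper) pairs.
citations = [
    ("paperA", "paper1"), ("paperA", "paper2"),
    ("paperB", "paper1"), ("paperB", "paper3"),
    ("paperC", "paper2"), ("paperC", "paper3"),
]

# Implicit ratings matrix: citing a paper counts as a positive rating.
ratings = defaultdict(set)
for citing, cited in citations:
    ratings[citing].add(cited)

# Co-citation counts: how often two papers are cited together.
cocite = defaultdict(int)
for cited_set in ratings.values():
    for a, b in combinations(sorted(cited_set), 2):
        cocite[(a, b)] += 1

def recommend(references, k=3):
    """Rank candidate citations by total co-citation count with the
    target paper's existing reference list `references`."""
    scores = defaultdict(int)
    for (a, b), count in cocite.items():
        if a in references and b not in references:
            scores[b] += count
        elif b in references and a not in references:
            scores[a] += count
    return sorted(scores, key=scores.get, reverse=True)[:k]

print(recommend({"paper1"}))  # e.g. ['paper2', 'paper3']
```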
In a web-based sexual behavior risk study using a rigorous response validation protocol, we identified 124 invalid responses out of 1,150 total (an 11% rejection rate). Nearly all of these (119) were due to repeat survey submissions from the same participants, and 65 of them came from a single participant. This brief describes how we were able to detect these repeat submissions using the validation protocol, and highlights the importance of using both automated and manual validation techniques.
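The brief itself contains no code, but a minimal sketch of the kind of automated check it describes might flag repeat submissions by matching identifying fields across responses. The field names and matching rule below are assumptions for illustration only; as the brief notes, manual review of flagged responses is still needed.

```python
def flag_repeat_submissions(responses, keys=("ip_address", "email")):
    """Flag survey responses that share any identifying field with an
    earlier response, keeping only the first submission from each
    apparent participant. `responses` is a list of dicts."""
    seen = {}           # (field, value) -> index of first occurrence
    flagged = set()
    for i, resp in enumerate(responses):
        duplicate = False
        for key in keys:
            value = resp.get(key)
            if value is None:
                continue
            if (key, value) in seen:
                duplicate = True
            else:
                seen[(key, value)] = i
        if duplicate:
            flagged.add(i)
    valid = [r for i, r in enumerate(responses) if i not in flagged]
    return valid, flagged
```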