Abstract. Pairwise learning-to-rank algorithms have been shown to allow recommender systems to leverage unary user feedback. We propose Multi-feedback Bayesian Personalized Ranking (MF-BPR), a pairwise method that exploits multiple types of feedback with an extended sampling method. The feedback types are drawn from different "channels" through which users interact with items (e.g., clicks, likes, listens, follows, and purchases). We build on the insight that different kinds of feedback, e.g., a click versus a like, reflect different levels of commitment or preference. Our approach differs from previous work in that it exploits multiple sources of feedback simultaneously during training. The novelty of MF-BPR is an extended sampling method that equates feedback sources with "levels" reflecting the expected contribution of the signal. We demonstrate the effectiveness of our approach with a series of experiments carried out on three datasets containing multiple types of feedback. Our experimental results demonstrate that, with the right sampling method, MF-BPR outperforms BPR in terms of accuracy. We find that the advantage of MF-BPR lies in its ability to leverage level information when sampling negative items.
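The level-aware pair sampling described above can be sketched as follows. This is a minimal illustration, assuming a simple channel-to-level mapping and a rule that the negative item must come from a strictly lower level (unobserved items counting as level 0); the channel names and levels are hypothetical, not the paper's exact design.

```python
import random

# Hypothetical mapping from feedback channel to preference level.
LEVELS = {"purchase": 3, "like": 2, "click": 1}

def sample_pair(user_feedback, all_items, rng=random):
    """Sample one (positive, negative) item pair for a BPR-style update.

    user_feedback: dict mapping item -> channel for one user's observed feedback.
    all_items: iterable of all item ids (observed and unobserved).
    The positive item is drawn from the observed feedback; the negative item is
    any item whose feedback level is strictly lower (unobserved = level 0).
    """
    pos = rng.choice(list(user_feedback))
    pos_level = LEVELS[user_feedback[pos]]
    candidates = [i for i in all_items
                  if LEVELS.get(user_feedback.get(i), 0) < pos_level]
    neg = rng.choice(candidates)
    return pos, neg
```

Under this scheme, an item the user merely clicked can serve as the negative example for an item the same user purchased, which is how level information enters the negative sampling step.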
Abstract. Factorization machines offer an advantage over existing collaborative filtering approaches to recommendation: they make it possible to work with any auxiliary information that can be encoded as a real-valued feature vector as a supplement to the information in the user-item matrix. We build on the assumption that different patterns characterize the way that users interact with (i.e., rate or download) items of a certain type (e.g., movies or books). We view interactions with a specific type of item as constituting a particular domain and allow interaction information from an auxiliary domain to inform recommendation in a target domain. Our proposed approach is tested on a data set from Amazon and compared with a state-of-the-art approach proposed for Cross-Domain Collaborative Filtering. Experimental results demonstrate that our approach, which has a lower computational complexity, is able to achieve performance improvements.
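The encoding idea above, i.e., folding auxiliary-domain interactions into a real-valued feature vector alongside the user-item pair, can be sketched as follows. The layout (user one-hot, item one-hot, normalized multi-hot of auxiliary-domain items) is a common factorization-machine encoding used here for illustration, not necessarily the paper's exact design.

```python
import numpy as np

def encode(user, item, aux_items, n_users, n_items, n_aux):
    """Encode one (user, item) interaction as an FM input vector.

    Layout: [user one-hot | item one-hot | normalized multi-hot of the
    user's interactions in the auxiliary domain].
    """
    x = np.zeros(n_users + n_items + n_aux)
    x[user] = 1.0                      # user indicator
    x[n_users + item] = 1.0            # target-domain item indicator
    if aux_items:
        w = 1.0 / len(aux_items)       # normalize so auxiliary block sums to 1
        for a in aux_items:
            x[n_users + n_items + a] = w
    return x
```

A standard factorization machine trained on such vectors then learns pairwise interactions between the target item and the user's auxiliary-domain history without any joint matrix factorization across domains.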
Abstract. We developed a learning-based question classifier for question answering systems. A question classifier tries to predict the entity type of the possible answers to a given question written in natural language. We extracted several lexical, syntactic, and semantic features and examined their usefulness for question classification. Furthermore, we developed a weighting approach to combine features based on their importance. Our result on the well-known TREC questions dataset is competitive with the state of the art on this task.
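The importance-based feature weighting mentioned above can be sketched as a weighted combination of per-group classification scores. The group names and weights below are hypothetical placeholders, not the paper's actual features or learned importances.

```python
def combine(feature_scores, weights):
    """Combine per-group class scores using per-group importance weights.

    feature_scores: {group: {class_label: score}}, e.g. scores from a
                    classifier run on only that feature group.
    weights: {group: importance weight}.
    Returns a single {class_label: combined score} dict.
    """
    classes = {c for scores in feature_scores.values() for c in scores}
    return {c: sum(weights[g] * s.get(c, 0.0)
                   for g, s in feature_scores.items())
            for c in classes}
```

The predicted entity type is then simply the class with the highest combined score.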
Abstract. This study aims to develop a recommender system for social learning platforms that combine traditional learning management systems with commercial social networks like Facebook. We therefore take into account social interactions of users to make recommendations on learning resources. We propose to make use of graph-walking methods to improve the performance of well-known baseline algorithms. We evaluate the proposed graph-based approach in terms of F1 score, an effective combination of precision and recall, two fundamental metrics used in the recommender systems area. The results show that the graph-based approach can help improve the performance of the baseline recommenders, particularly for the rather sparse educational datasets used in this study.
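For reference, the F1 score used as the evaluation metric above is the harmonic mean of precision and recall, a short sketch:

```python
def f1_score(precision, recall):
    """Harmonic mean of precision and recall; defined as 0 when both are 0."""
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)
```

Because the harmonic mean is dominated by the smaller of the two values, a recommender must do reasonably well on both precision and recall to obtain a high F1, which makes the metric a balanced single-number summary for sparse datasets.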