Making accurate recommendations for cold-start users is a challenging yet important problem in recommender systems. Incorporating information from other domains is a natural way to improve such recommendations. However, most previous work on cross-domain recommendation has focused on improving prediction accuracy and suffers from several severe limitations. In this article, we extend our previous work on clustering-based matrix factorization in single domains to cross-domain settings. In addition, we utilize recent results on unobserved ratings. Our new method can more effectively exploit data from auxiliary domains to achieve better recommendations, especially for cold-start users. For example, on the cross-domain Amazon dataset, our method improves recall for cold-start users to 21% on average, whereas previous methods achieve only 15%. We observe almost the same improvements on the Epinions dataset. Considering that even a small improvement in recommendations is often difficult to achieve, for cold-start users in particular, this result is quite significant.
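The abstract does not give the model details, so the following is only a minimal sketch of the general idea it describes: factorizing a target-domain and an auxiliary-domain rating matrix with a shared set of user factors, so that auxiliary ratings help position cold-start users. All matrices, sizes, and hyperparameters below are hypothetical toy values, not the paper's actual method.

```python
import numpy as np

# Hypothetical toy ratings: rows = users (shared across domains), cols = items.
# Zeros denote unobserved entries. Domain A is the target, domain B is auxiliary.
R_a = np.array([[5, 0, 1], [4, 0, 0], [0, 2, 5]], dtype=float)
R_b = np.array([[0, 4, 5, 1], [5, 4, 0, 0], [1, 0, 0, 5]], dtype=float)

rng = np.random.default_rng(0)
k, lr, reg = 2, 0.02, 0.05
U = rng.normal(scale=0.1, size=(3, k))    # user factors, shared by both domains
V_a = rng.normal(scale=0.1, size=(3, k))  # item factors, domain A
V_b = rng.normal(scale=0.1, size=(4, k))  # item factors, domain B

def sgd_pass(R, V):
    """One SGD sweep over the observed entries of R, updating U and V in place."""
    for u, i in zip(*np.nonzero(R)):
        err = R[u, i] - U[u] @ V[i]
        U[u] += lr * (err * V[i] - reg * U[u])
        V[i] += lr * (err * U[u] - reg * V[i])

for _ in range(200):
    sgd_pass(R_a, V_a)  # target-domain update
    sgd_pass(R_b, V_b)  # auxiliary ratings refine the same shared user factors

pred = U @ V_a.T  # predicted ratings for the target domain
```

Because `U` is trained on both matrices, a user with few target-domain ratings still gets a useful latent profile from the auxiliary domain, which is the intuition behind the cold-start gains the abstract reports.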
ACM Reference Format: Nima Mirbakhsh and Charles X. Ling. 2015. Improving top-N recommendation for cold-start users via cross-domain information.
Factorized collaborative models show promising accuracy and scalability in recommender systems. They exploit latent collaborative information about users and items to achieve higher recommendation accuracy. In this paper, we propose a new approach to improve the accuracy of two well-known, highly scalable factorized models: SVD++ and Asymmetric-SVD++. These are cutting-edge factorized models that played a key role in the Netflix Prize winner's solution. We first employ collaborative information to categorize users and items. We then discover the shared interests between these categories. Incorporating this new information, we extend these models with two main goals: 1) to improve their recommendation accuracy; 2) to keep the extended models scalable. Finally, we evaluate our proposed models on two recommendation datasets: MovieLens100k and Netflix. Our experiments show that adding the shared interests among categories into these models improves their accuracy while maintaining scalability.
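As background for the models being extended, here is the standard SVD++ prediction rule. The category-level shared-interest terms the paper adds are not shown, and all numeric values below are hypothetical toy parameters.

```python
import numpy as np

def svdpp_predict(mu, b_u, b_i, q_i, p_u, Y, rated_items):
    """SVD++ rating prediction:
    r_hat = mu + b_u + b_i + q_i . (p_u + |N(u)|^(-1/2) * sum_{j in N(u)} y_j)
    where N(u) is the set of items user u has rated (implicit feedback)."""
    implicit = Y[rated_items].sum(axis=0) / np.sqrt(len(rated_items))
    return mu + b_u + b_i + q_i @ (p_u + implicit)

# Hypothetical toy parameters (k = 2 latent factors, 4 items).
mu, b_u, b_i = 3.6, 0.2, -0.1                   # global mean, user bias, item bias
p_u = np.array([0.3, -0.1])                      # explicit user factors
q_i = np.array([0.5, 0.4])                       # item factors
Y = np.array([[0.1, 0.0], [0.0, 0.2],
              [-0.1, 0.1], [0.2, -0.2]])         # implicit item factors y_j

r_hat = svdpp_predict(mu, b_u, b_i, q_i, p_u, Y, rated_items=[0, 1, 3])
```

The implicit-feedback sum is what distinguishes SVD++ from plain matrix factorization: every item the user has interacted with, rated or not, shifts the user's latent profile, and prediction stays a cheap dot product, which is why the model scales well.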
A critical issue for neural-network-based large-scale data mining algorithms is how to speed up learning. This problem is particularly challenging for the Error Back-Propagation (EBP) algorithm in Multi-Layer Perceptron (MLP) neural networks, given their significant applications in many scientific and engineering problems. In this paper, we propose an Adaptive Variable Learning Rate EBP (AVLR-EBP) algorithm that reduces convergence time, achieving high-speed convergence compared with the standard EBP algorithm. The idea is inspired by adaptive filtering, which led us to two semi-similar methods of calculating the learning rate. Mathematical analysis of the AVLR-EBP algorithm confirms its convergence property. The AVLR-EBP algorithm is applied to data classification. Simulation results on many well-known datasets demonstrate that the algorithm achieves a considerable reduction in convergence time compared to the standard EBP algorithm. In classifying the IRIS, Wine, Breast Cancer, Semeion, and SPECT Heart datasets, the proposed algorithm reduces the number of learning epochs relative to standard EBP.
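The abstract does not give the AVLR-EBP update rule itself. As a sketch of the adaptive-filtering intuition it draws on, the following compares a fixed learning rate with a normalized-LMS-style adaptive rate (step size scaled by the input's squared norm) on a toy linear problem; this is a stand-in illustration, not the paper's algorithm.

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 5))       # toy inputs
w_true = rng.normal(size=5)
y = X @ w_true                      # noiseless targets

def lms(step_fn, tol=1e-3, max_epochs=500):
    """Sample-by-sample LMS training; returns epochs until MSE drops below tol."""
    w = np.zeros(5)
    for epoch in range(1, max_epochs + 1):
        for x, t in zip(X, y):
            err = t - w @ x
            w += step_fn(x) * err * x       # learning rate chosen by step_fn
        if np.mean((X @ w - y) ** 2) < tol:
            return epoch
    return max_epochs

fixed = lms(lambda x: 0.01)                     # fixed rate, as in standard EBP
adaptive = lms(lambda x: 0.5 / (1e-8 + x @ x))  # NLMS-style adaptive rate
```

Normalizing the step by the input energy lets each update take as large a stable step as the current sample allows, which is the same reasoning that motivates adapting the learning rate per iteration in EBP.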