The spread of information technology has made extraordinarily large amounts of data accessible to users, making it challenging to select the data that match user needs. Recommender systems have emerged to address this issue; they help users through the process of making decisions and selecting relevant items. A recommender system predicts users' behavior in order to detect their interests and needs, and it often uses classification techniques for this purpose. Individual classification may not be sufficiently accurate, since not all cases can be examined, which makes the method inappropriate for certain problems. In this research, group classification and ensemble learning were used to increase prediction accuracy in recommender systems. A second issue concerns user analysis: given the large size of the data and the large number of users, analyzing and predicting user needs (which in most cases relies on a graph representing the relations between users and their selected items) is complicated and cumbersome in recommender systems. Graph embedding was proposed to resolve this issue: all or part of user behavior can be simulated by generating vectors, which largely resolves the problem of user behavior analysis while maintaining high efficiency. In this research, the individuals most similar to the target user were classified using ensemble learning, fuzzy rules, and a decision tree, and recommendations were then made to each user with a heterogeneous knowledge graph and embedding vectors. The study was performed on the MovieLens datasets, and the results indicated the high efficiency of the presented method.
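The abstract's core idea of combining several classifiers to predict a user's preference can be illustrated with a minimal bagging-and-majority-vote sketch. This is not the paper's implementation: the nearest-centroid base learner below is only a simple stand-in for the fuzzy-rule and decision-tree learners the authors use, and all function names are illustrative assumptions.

```python
import random
from collections import Counter

def bootstrap_sample(data, rng):
    # Draw a sample of the same size, with replacement (bagging).
    return [rng.choice(data) for _ in data]

def nearest_centroid(train):
    # Base learner: classify by the nearest class centroid. A stand-in for
    # the fuzzy-rule / decision-tree base classifiers described in the paper.
    centroids = {}
    for label in {y for _, y in train}:
        points = [x for x, y in train if y == label]
        centroids[label] = [sum(col) / len(points) for col in zip(*points)]
    def predict(x):
        return min(centroids,
                   key=lambda c: sum((a - b) ** 2 for a, b in zip(x, centroids[c])))
    return predict

def bagged_predict(train, x, n_models=5, seed=0):
    # Ensemble: fit each base model on a bootstrap sample, then majority-vote.
    rng = random.Random(seed)
    votes = [nearest_centroid(bootstrap_sample(train, rng))(x)
             for _ in range(n_models)]
    return Counter(votes).most_common(1)[0][0]
```

Majority voting over diverse base models is the standard way an ensemble compensates for the weaknesses of any individual classifier, which is the accuracy argument the abstract makes.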
Feature selection is an essential data preprocessing stage in data mining. The core principle of feature selection is to pick a subset of candidate features by excluding features with little or no predictive information as well as highly correlated, redundant features. Over the past several years, a variety of meta-heuristic methods have been introduced to eliminate redundant and irrelevant features from high-dimensional datasets. Among the main disadvantages of existing meta-heuristic approaches is that they often neglect the correlation between the selected features. In this article, the authors propose a genetic algorithm based on community detection for feature selection, which operates in three steps. In the first step, feature similarities are calculated. In the second step, the features are grouped into clusters by community detection algorithms. In the third step, features are selected by a genetic algorithm with a novel community-based repair operation. The performance of the presented approach was evaluated on nine benchmark classification problems. The authors also compared the efficiency of the proposed approach with the results of four existing feature selection algorithms. A comparison with three recent feature selection methods based on the PSO, ACO, and ABC algorithms, run on three classifiers, showed that the accuracy of the proposed method is on average 0.52% higher than PSO, 1.20% higher than ACO, and 1.57% higher than ABC.
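The three steps above can be sketched in miniature. The grouping function below is a greedy, threshold-based stand-in for the community detection algorithms the authors use, and `repair` illustrates the idea of a community-based repair operator: when a chromosome selects several features from the same community (i.e., mutually redundant features), only one of them is kept. The function names and the similarity threshold are illustrative assumptions, not the paper's code.

```python
import random

def feature_communities(sim, threshold=0.7):
    # Greedy stand-in for community detection: assign each feature the
    # community of the first earlier feature it is strongly similar to.
    n = len(sim)
    community = [-1] * n
    next_id = 0
    for i in range(n):
        if community[i] == -1:
            community[i] = next_id
            for j in range(i + 1, n):
                if community[j] == -1 and sim[i][j] >= threshold:
                    community[j] = next_id
            next_id += 1
    return community

def repair(chromosome, community, rng):
    # Community-based repair: group the selected features by community,
    # then keep one randomly chosen representative per community.
    chosen = {}
    for idx, bit in enumerate(chromosome):
        if bit:
            chosen.setdefault(community[idx], []).append(idx)
    repaired = [0] * len(chromosome)
    for members in chosen.values():
        repaired[rng.choice(members)] = 1
    return repaired
```

Applying such a repair operator after crossover and mutation keeps every chromosome free of within-community redundancy, which is how the method accounts for the correlation between selected features that other meta-heuristics neglect.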
In the past decades, the rapid growth of computer and database technologies has led to large-scale datasets. At the same time, data mining applications that require high speed and accuracy on high-dimensional datasets are rapidly increasing. Semi-supervised learning is a class of machine learning in which unlabeled and labeled data are used simultaneously to improve feature selection. The goal of feature selection over partially labeled data (semi-supervised feature selection) is to choose a subset of available features with the lowest redundancy among themselves and the highest relevance to the target class, the same objective as feature selection over fully labeled data. The presented method uses classification to reduce ambiguity in the range of similarity values: first, the similarity values of each pair are collected; these values are then divided into intervals, and the average of each interval is determined; next, the number of pairs falling in each interval is counted. Finally, using the strength and similarity matrices, a new constrained feature selection ranking is proposed. The performance of the presented method was compared with state-of-the-art and well-known semi-supervised feature selection approaches on eight datasets. The results indicate that the proposed approach improves on previous related approaches with respect to the accuracy of the constrained score. In particular, the numerical results showed that the presented approach improved classification accuracy by about 3% and reduced the number of selected features by 1%. Consequently, the proposed method reduces the computational complexity of the machine learning algorithm while increasing classification accuracy.
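The interval procedure described above (collect the pairwise similarity values, divide them into intervals, average each interval, and count the pairs per interval) can be sketched as follows. Equal-width intervals are an assumption on our part, since the abstract does not state how the intervals are formed.

```python
def interval_summary(similarities, n_intervals=4):
    # Split pairwise similarity values into equal-width intervals and
    # return, per interval, (mean similarity, number of pairs).
    lo, hi = min(similarities), max(similarities)
    width = (hi - lo) / n_intervals or 1.0  # guard against all-equal values
    buckets = [[] for _ in range(n_intervals)]
    for s in similarities:
        # Clamp so the maximum value lands in the last interval.
        k = min(int((s - lo) / width), n_intervals - 1)
        buckets[k].append(s)
    return [(sum(b) / len(b) if b else 0.0, len(b)) for b in buckets]
```

Replacing each raw similarity by its interval mean is what reduces the ambiguity in the range of values; the per-interval pair counts would then feed the strength matrix used in the ranking step.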