An effective distributed predictive model with Matrix factorization and random forest for Big Data recommendation systems
Year: 2019
DOI: 10.1016/j.eswa.2019.06.046

Cited by 31 publications (16 citation statements); References: 60 publications
“…They integrate several weak classification decision trees into a strong classifier. More specifically, each decision tree generates an independent classification and the final result is the one that received the majority of the votes among all the decision trees [62,63].…”
Section: Machine Learning Models (mentioning)
confidence: 99%
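The excerpt above summarizes the random forest idea used by the cited model: many weak decision trees classify independently and the majority vote decides the final label. A minimal sketch of that voting step, assuming scikit-learn's DecisionTreeClassifier as the weak learner and a synthetic dataset (both are illustrative choices, not the paper's setup):

# Majority voting over independently trained decision trees, as described
# in the excerpt above. The learner and data are illustrative assumptions.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=500, n_features=10, random_state=0)

trees = []
rng = np.random.default_rng(0)
for _ in range(25):
    # Bootstrap sample so each tree sees a slightly different view of the data.
    idx = rng.integers(0, len(X), size=len(X))
    trees.append(DecisionTreeClassifier(max_features="sqrt").fit(X[idx], y[idx]))

# Each tree classifies independently; the final label is the majority vote.
votes = np.stack([t.predict(X) for t in trees])   # shape: (n_trees, n_samples)
majority = np.apply_along_axis(lambda v: np.bincount(v).argmax(), 0, votes)

print("ensemble accuracy:", (majority == y).mean())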
“…For a classification task, samples to be classified are used as inputs and each decision tree generates an independent classification result. The final classification labels are determined by the majority voting of all the decision trees [30], [37], [38]. Therefore, the RF algorithm can overcome some limitations of single decision tree and has been widely applied in the load forecasting [1], [39].…”
Section: B. Load Consumption Pattern Determination Based on Random Forest (mentioning)
confidence: 99%
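This excerpt applies the same vote-of-many-trees scheme through an off-the-shelf random forest rather than hand-rolled voting. A brief sketch with scikit-learn's RandomForestClassifier; the synthetic data are placeholders, not the load-forecasting data from the cited work:

# Off-the-shelf random forest: the library trains the trees and aggregates
# their votes internally. Data are synthetic placeholders.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, n_features=12, random_state=1)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=1)

rf = RandomForestClassifier(n_estimators=100, random_state=1).fit(X_tr, y_tr)
print("held-out accuracy:", rf.score(X_te, y_te))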
“…In preprocessing the review data, we attempt to clean our review data as much as possible using the natural language toolkit (NLTK) in Python. The idea is to remove the punctuation, numbers and special characters in one step using regex replace, which will replace everything, except letters, with a space.…”
Section: B. Review Data Preprocessing (mentioning)
confidence: 99%
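The cleaning step quoted above keeps only letters by replacing everything else with a space in a single regex pass. A small sketch of that idea; the lowercasing and NLTK stopword filtering are extra illustrative steps beyond the quoted description:

# Replace everything except letters with a space, so punctuation, numbers
# and special characters vanish in one regex pass (as described above).
# Lowercasing and stopword removal are additional illustrative steps.
import re
import nltk
from nltk.corpus import stopwords

nltk.download("stopwords", quiet=True)

def clean_review(text: str) -> str:
    text = re.sub(r"[^a-zA-Z]", " ", text)   # keep letters only
    tokens = text.lower().split()
    stops = set(stopwords.words("english"))
    return " ".join(t for t in tokens if t not in stops)

print(clean_review("Great phone!!! Battery lasts 2 days & charges fast :)"))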
“…The operation of missing rating prediction in the user-item interaction will be drastically reduced if the data are limited. The sparse data leads to potentially untrustworthy predictions [2], [35]. Although CF models that can effectively predict temporal rating data exist, their capability of tracking the dynamics of user preferences is limited on sparse data, and there are few dynamic CF models that both consider drift on the basis of users' preferences and handle data sparsity [53], [60].…”
Section: Introduction (mentioning)
confidence: 99%
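This excerpt points at the sparsity problem that the reviewed paper's matrix factorization component targets: most user-item ratings are missing, and predictions for them degrade when observed data are scarce. A minimal matrix-factorization sketch that predicts missing ratings from the observed entries; the latent dimension, learning rate and toy rating matrix are illustrative assumptions, not the paper's configuration:

# Toy matrix factorization by SGD: learn user/item latent factors from the
# observed entries of a sparse rating matrix, then predict the missing ones.
# Hyperparameters and the toy matrix are illustrative assumptions.
import numpy as np

R = np.array([[5, 3, 0, 1],
              [4, 0, 0, 1],
              [1, 1, 0, 5],
              [0, 1, 5, 4]], dtype=float)   # 0 marks a missing rating
observed = np.argwhere(R > 0)

k, lr, reg, epochs = 2, 0.01, 0.02, 500
rng = np.random.default_rng(0)
P = rng.normal(scale=0.1, size=(R.shape[0], k))   # user factors
Q = rng.normal(scale=0.1, size=(R.shape[1], k))   # item factors

for _ in range(epochs):
    for u, i in observed:
        err = R[u, i] - P[u] @ Q[i]
        P[u] += lr * (err * Q[i] - reg * P[u])
        Q[i] += lr * (err * P[u] - reg * Q[i])

# Predictions for the unobserved cells come from the learned factors.
print(np.round(P @ Q.T, 2))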