2022
DOI: 10.32604/csse.2022.020043

Ensemble Variable Selection for Naive Bayes to Improve Customer Behaviour Analysis

Abstract: Executing customer analysis in a systematic way is one of the possible solutions for each enterprise to understand the behavior of consumer patterns in an efficient and in-depth manner. Further investigation of customer patterns helps the firm to make efficient decisions, which in turn helps to optimize the enterprise's business and maximize consumer satisfaction correspondingly. To conduct an effective assessment of the customers, Naive Bayes (also called Simple Bayes), a machine learning model, is utilized.…

Cited by 15 publications (3 citation statements) | References 20 publications

Citation statements (ordered by relevance):
“…To capture the relevant variable subset, an appropriate threshold (t) was applied to the final aggregated sorted list. The variable subset in this investigation was chosen using a 50% threshold value [14]. Lastly, the selected variable subset was modelled with the Naive Bayes classifier, and the results were evaluated with various validity scores.…”
Section: Aggregation Methods and Cut-off Value
confidence: 99%
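As a rough illustration of this selection step, the sketch below aggregates two filter rankings, applies a 50% cut-off to the sorted list, and fits a Naive Bayes classifier on the retained subset. The ranking criteria, the synthetic data, and the use of scikit-learn's GaussianNB are assumptions for illustration, not details taken from the cited paper.

```python
# Minimal sketch: aggregate feature rankings, keep the top 50%, fit Naive Bayes.
# The two ranking criteria and the synthetic data are illustrative assumptions.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import mutual_info_classif, f_classif
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=500, n_features=20, n_informative=6, random_state=0)

# Rank features with two filter criteria (higher score -> better, i.e. lower, rank).
scores = [mutual_info_classif(X, y, random_state=0), f_classif(X, y)[0]]
ranks = np.mean([(-s).argsort().argsort() for s in scores], axis=0)  # aggregated rank per feature

# Apply the cut-off t = 50%: keep the better-ranked half of the features.
t = 0.5
keep = np.argsort(ranks)[: int(t * X.shape[1])]

# Model the selected subset with a (Gaussian) Naive Bayes classifier.
nb = GaussianNB()
print("CV accuracy on selected subset:", cross_val_score(nb, X[:, keep], y, cv=5).mean())
```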
“…In conclusion, XGB is a strong and adaptable algorithm that can be applied to a variety of ML problems. It is a desirable option for many applications because of its handling of missing data, its regularization approaches, and its scalability [42]. The hyperparameters applied are: learning_rate=0.1 (the learning rate controls the step size of updates during the boosting process), n_estimators=100 (this parameter determines the number of boosting rounds), subsample=1.0 (subsample specifies the fraction of samples to be used for training each tree), colsample_bytree=1.0 (colsample_bytree specifies the fraction of features to be randomly sampled for training each tree). e) GNB: The GNB.…”
Section: 3 ML Models
confidence: 99%
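A minimal sketch of how that configuration might look with the xgboost and scikit-learn APIs follows; the synthetic data and the side-by-side GNB fit are illustrative assumptions rather than the cited study's setup.

```python
# Minimal sketch: XGBoost with the hyperparameters quoted above, alongside Gaussian NB.
# Assumes the xgboost and scikit-learn packages; the data is synthetic for illustration.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from xgboost import XGBClassifier

X, y = make_classification(n_samples=1000, n_features=25, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

xgb = XGBClassifier(
    learning_rate=0.1,      # step size of each boosting update
    n_estimators=100,       # number of boosting rounds
    subsample=1.0,          # fraction of rows sampled per tree
    colsample_bytree=1.0,   # fraction of features sampled per tree
)
gnb = GaussianNB()

for name, model in [("XGB", xgb), ("GNB", gnb)]:
    model.fit(X_tr, y_tr)
    print(name, "test accuracy:", model.score(X_te, y_te))
```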
“…Since the user-movie rating matrix is a coefficient matrix with very high dimensions, different algorithms are used for its decomposition, including SGD, SGLD, and SGHMC. Among them, SGD is an optimization-based method, while SGLD and SGHMC are sampling-based Bayesian [25] probability matrix decomposition algorithms. The final results are shown in Tab.…”
Section: Recommendation Algorithm Based on SVD
confidence: 99%
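For the optimization-based variant, a bare-bones SGD matrix-factorization sketch is given below; the latent dimension, learning rate, regularization, and synthetic ratings are assumptions, and the sampling-based SGLD/SGHMC counterparts are not reproduced.

```python
# Minimal sketch of SGD-based matrix factorization for a user-movie rating matrix.
# Latent dimension, learning rate, and regularization are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
n_users, n_movies, k = 50, 40, 8              # k = latent dimension (assumed)
P = 0.1 * rng.standard_normal((n_users, k))   # user latent factors
Q = 0.1 * rng.standard_normal((n_movies, k))  # movie latent factors

# Observed ratings as (user, movie, rating) triples; synthetic for illustration.
ratings = [(rng.integers(n_users), rng.integers(n_movies), rng.integers(1, 6))
           for _ in range(2000)]

lr, reg = 0.01, 0.05
for epoch in range(20):
    for u, m, r in ratings:
        pu = P[u].copy()
        err = r - pu @ Q[m]                      # prediction error for this rating
        P[u] += lr * (err * Q[m] - reg * pu)     # gradient step on user factors
        Q[m] += lr * (err * pu - reg * Q[m])     # gradient step on movie factors

print("predicted rating for user 0, movie 0:", P[0] @ Q[0])
```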