2013
DOI: 10.19026/rjaset.6.4092
Improved Fuzzy C-Means Clustering for Personalized Product Recommendation

Abstract: With the rapid development of e-commerce, better understanding users' needs in order to provide more satisfying personalized services has become a crucial issue. To address this problem, this study presents a novel approach to personalized product recommendation based on Fuzzy C-Means (FCM) clustering. First, the traditional FCM clustering algorithm is improved through membership adjustment and a density function, in order to address the issues that the number of clusters is difficult to determine and that the convergence of the ob…

Cited by 4 publications (4 citation statements) · References 13 publications
“…AdaBoost forms a linear combination of selected classifier instances to create an overall ensemble. AdaBoost-based ensembles rarely over-fit a solution even if a large number of base classifier instances are used (Lei & He, 2017), and it minimizes an exponential loss function by fitting a stage-wise additive model (Wu & Wu, 2013). As the minimization of classification error implies the optimization of a non-smooth, non-differentiable cost function which can be best approximated by an exponential loss (Koren, Bell & Volinsky, 2009), AdaBoost therefore performs extremely well over a wide range of classification problems.…”
Section: Methods
confidence: 99%
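The stage-wise additive view of AdaBoost described in this excerpt can be sketched as follows. This is a minimal illustrative implementation (not the cited papers' code): each round fits a decision stump to exponentially reweighted samples and adds it to a linear combination of classifiers; the function and variable names (`adaboost_stumps`, `predict`) are hypothetical.

```python
import numpy as np

def adaboost_stumps(X, y, n_rounds=10):
    """Stage-wise additive modeling: each round fits a decision stump
    to weighted 1-D data and appends it to a linear combination F(x).
    The sample reweighting greedily minimizes the exponential loss
    sum_i exp(-y_i * F(x_i))."""
    n = len(X)
    w = np.full(n, 1.0 / n)           # uniform initial sample weights
    stumps = []                        # list of (threshold, polarity, alpha)
    for _ in range(n_rounds):
        best = None
        # exhaustive search for the stump with lowest weighted error
        for thr in X:
            for pol in (1, -1):
                pred = np.where(X > thr, pol, -pol)
                err = w[pred != y].sum()
                if best is None or err < best[0]:
                    best = (err, thr, pol, pred)
        err, thr, pol, pred = best
        err = min(max(err, 1e-10), 1 - 1e-10)      # numerical guard
        alpha = 0.5 * np.log((1 - err) / err)      # classifier weight
        stumps.append((thr, pol, alpha))
        w *= np.exp(-alpha * y * pred)             # exponential-loss reweighting
        w /= w.sum()
    return stumps

def predict(stumps, X):
    """Sign of the weighted linear combination of all stumps."""
    F = sum(a * np.where(X > t, p, -p) for t, p, a in stumps)
    return np.sign(F)
```

On a toy separable problem, a few rounds suffice to recover the labels, which illustrates why the ensemble rarely over-fits even as classifiers accumulate: later stumps receive small weight once the weighted error stops improving.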
“…The selected classifiers are combined to form a strong ensemble. AdaBoost-based ensembles rarely overfit a solution even if a large number of base classifier instances are used [13], and it minimizes an exponential loss function by fitting a stage-wise additive model [14], since the minimization of classification error implies the optimization of a non-smooth, non-differentiable cost function that can be best approximated by an exponential loss [15].…”
Section: Ensemble Learning
confidence: 99%
“…Following the extraction, classification of the browsing patterns and prediction of users' future requests started. Wu and Wu (2013) improved the conventional fuzzy C-means clustering algorithm by adjusting its membership and density functions, in order to address the problems that the number of clusters is difficult to determine and that the objective function converges poorly. Next, personal preferences were divided into several groups so that users with similar preferences were placed in the same group.…”
Section: Related Work
confidence: 99%
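The clustering step this excerpt describes rests on standard fuzzy C-means, which alternates between updating fuzzy memberships and cluster centers. Below is a minimal sketch of plain FCM (without the paper's density-function improvement for choosing the number of clusters, which is not reproduced here); the name `fcm` and its parameters are illustrative assumptions.

```python
import numpy as np

def fcm(X, c, m=2.0, n_iter=100, seed=0):
    """Fuzzy C-Means: alternate between fuzzy memberships u[i, k]
    and cluster centers until the objective J = sum_ik u^m * d^2
    stabilizes. m > 1 is the fuzzifier; c is the cluster count."""
    rng = np.random.default_rng(seed)
    n = len(X)
    u = rng.random((n, c))                  # random initial memberships
    u /= u.sum(axis=1, keepdims=True)       # rows sum to 1
    for _ in range(n_iter):
        um = u ** m
        # centers: membership-weighted means of the data points
        centers = (um.T @ X) / um.sum(axis=0)[:, None]
        # distances from every point to every center
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        d = np.maximum(d, 1e-12)            # avoid division by zero
        # membership update: inverse-distance weighting
        inv = d ** (-2.0 / (m - 1))
        u = inv / inv.sum(axis=1, keepdims=True)
    return centers, u
```

Grouping users by preference then amounts to running this on preference vectors and assigning each user to the cluster with the highest membership (`u.argmax(axis=1)`), so users with similar preferences land in the same group.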
“…The vector of user browsing patterns gives a condensed view of the behavior of a group of users based on their common interests and information needs (Wu et al., 2013). These movement patterns are used to determine the similarity between new profiles and previous ones.…”
Section: Fig. 2 FCM Clustering Algorithm With S(c) Criteria
confidence: 99%
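Matching a new profile against previously formed group vectors, as the excerpt above describes, is commonly done with cosine similarity over browsing-pattern count vectors. A minimal sketch, assuming each group is summarized by one aggregate vector (the function name `nearest_group` is hypothetical, not from the cited work):

```python
import numpy as np

def nearest_group(profile, group_vectors):
    """Return the index of the group whose aggregate browsing-pattern
    vector is most similar (by cosine similarity) to the new user's
    profile vector. All vectors are assumed non-zero."""
    sims = [profile @ g / (np.linalg.norm(profile) * np.linalg.norm(g))
            for g in group_vectors]
    return int(np.argmax(sims))
```

For example, a user whose visits concentrate on the same pages as a given group's aggregate vector is assigned to that group, after which recommendations can be drawn from the group's shared preferences.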