2009
DOI: 10.1007/978-3-642-04595-0_26

A Non-sequential Representation of Sequential Data for Churn Prediction

Abstract: We investigate the length of event sequence that gives the best predictions when using a continuous HMM approach to churn prediction from sequential data. Motivated by observations that predictions based on only the few most recent events seem to be the most accurate, a non-sequential dataset is constructed from customer event histories by averaging the features of the last few events. A simple K-nearest neighbor algorithm on this dataset is found to give significantly improved performance. It is quite intuitive…
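The representation described in the abstract can be sketched in a few lines: collapse each customer's event history into a single vector by averaging the features of the last k events, then classify with a plain nearest-neighbour vote. This is a minimal illustration, not the authors' implementation; the feature pairs, customer labels, and helper names below are all hypothetical.

```python
import math

def last_k_average(events, k=3):
    """Average the feature vectors of the last k events (hypothetical features)."""
    tail = events[-k:]
    n = len(tail)
    return [sum(vals) / n for vals in zip(*tail)]

def knn_predict(train, query, k=1):
    """Classify `query` by majority vote among its k nearest training points."""
    dists = sorted((math.dist(x, query), label) for x, label in train)
    votes = [label for _, label in dists[:k]]
    return max(set(votes), key=votes.count)

# Toy customer histories: each event is a (days-since-last-contact, spend) pair.
histories = {
    "churner":  [(30, 5.0), (60, 2.0), (90, 1.0)],   # activity tailing off
    "retained": [(10, 8.0), (12, 9.0), (9, 10.0)],   # steady engagement
}
train = [(last_k_average(ev), label) for label, ev in histories.items()]

# A new customer whose recent events look like declining activity.
query = last_k_average([(25, 4.0), (70, 1.5), (95, 0.5)])
print(knn_predict(train, query, k=1))  # → churner
```

In a realistic setting the event features would be standardised before computing distances, and k (both the averaging window and the number of neighbours) would be chosen by cross-validation, which is the configuration question the paper investigates.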

Cited by 6 publications (3 citation statements)
References 7 publications
“…The task is usually to derive the maximum likelihood estimate of the parameters of the HMM for the set of output sequences. Typical physical activities are nonsequential, and it is not easy to use HMM to recognize a single physical activity [28].…”
Section: Introduction (mentioning)
Confidence: 99%
“…The only configuration possible is a choice of different values for K ≥ 1, that is, how many nearest neighbours we want to use. KNNs can be used in its original non‐sequential form (Eastwood & Gabrys, ), or it can be extended into a sequential approach (Ruta et al., ). The core idea is to find similar sequences expecting that these sequences have a common behaviour and outcome.…”
Section: Predictive Models (mentioning)
Confidence: 99%
“…One modern and powerful classifier is the support vector machine (SVM), which has the advantages of being robust and delivering a unique solution because the optimality problem is convex (Auria and Moro 2008). A SVM provides more accurate activity recognition compared to other classifiers such as a hidden Markov model (Eastwood and Gabrys et al 2009) or a neural network (Lara and Labrador et al 2013). However, a SVM is expensive computationally, especially for large datasets.…”
Section: Introduction (mentioning)
Confidence: 99%