2018
DOI: 10.48550/arXiv.1802.04198
Preprint

client2vec: Towards Systematic Baselines for Banking Applications

Abstract: The workflow of data scientists normally involves potentially inefficient processes such as data mining, feature engineering and model selection. Recent research has focused on automating this workflow, partly or in its entirety, to improve productivity. We choose the former approach and in this paper share our experience in designing client2vec: an internal library to rapidly build baselines for banking applications. Client2vec uses marginalized stacked denoising autoencoders on current account transactions…
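The abstract's key ingredient, the marginalized stacked denoising autoencoder (mSDA), admits a closed-form per-layer solution, so embeddings can be computed without gradient descent. Below is a minimal NumPy sketch of one marginalized denoising layer in the style of Chen et al.'s mSDA; the function names, the corruption level, and the toy transaction matrix are illustrative assumptions, not the client2vec API.

```python
import numpy as np

def mda_layer(X, p=0.5):
    """One marginalized denoising autoencoder layer (closed form).

    X : (d, n) array, one column per client (e.g. bag of transaction codes).
    p : feature corruption (drop-out) probability, marginalized analytically.
    Returns the mapping W and the hidden representation tanh(W @ X_bias).
    """
    d, n = X.shape
    Xb = np.vstack([X, np.ones((1, n))])          # append a bias row
    q = np.full((d + 1, 1), 1.0 - p)
    q[-1] = 1.0                                   # the bias is never corrupted
    S = Xb @ Xb.T                                 # scatter matrix
    Q = S * (q @ q.T)                             # E[x~ x~^T], off-diagonal
    np.fill_diagonal(Q, q.ravel() * np.diag(S))   # diagonal: E[x~_i^2] = q_i S_ii
    P = S[:d, :] * q.T                            # E[x x~^T], cross terms
    # Solve W Q = P for W; the small ridge term keeps Q invertible.
    W = np.linalg.solve(Q + 1e-5 * np.eye(d + 1), P.T).T
    return W, np.tanh(W @ Xb)

def msda(X, layers=3, p=0.5):
    """Stack layers; each layer re-encodes the previous hidden output."""
    H = X
    for _ in range(layers):
        _, H = mda_layer(H, p)
    return H  # (d, n): one embedding column per client

# Toy usage: 6 transaction categories, 4 clients.
rng = np.random.default_rng(0)
X = rng.poisson(1.0, size=(6, 4)).astype(float)
emb = msda(X, layers=2, p=0.5)
print(emb.shape)  # (6, 4)
```

The closed form is what makes mSDA attractive as a baseline builder: each layer is a single linear solve over the feature covariance, so it scales with feature dimension rather than with the number of corrupted copies a sampled denoising autoencoder would need.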

Cited by 2 publications (2 citation statements) · References 21 publications (33 reference statements)
“…Towards that end, they also use an n-skip-gram model to learn customer embeddings and track their evolution over time as purchases are made. [20] uses a stacked denoising autoencoder to learn customer embeddings for improving their campaign decisions or clustering clients into classes.…”
Section: Background and Related Work, A. Transaction-based Item and Cus… (mentioning)
confidence: 99%
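The quoted passage treats each client's transaction history as a sequence and learns embeddings with a skip-gram model. Here is a minimal sketch of that idea using gensim's Word2Vec; the merchant-category sequences and the mean-pooling step are made-up illustrations, not the cited papers' setup.

```python
import numpy as np
from gensim.models import Word2Vec

# Hypothetical data: each client is a chronological sequence of transaction
# categories (the "sentence"); categories play the role of words.
client_histories = [
    ["grocery", "fuel", "grocery", "restaurant", "rent"],
    ["fuel", "fuel", "grocery", "insurance", "rent"],
    ["restaurant", "travel", "grocery", "travel", "rent"],
]

# Skip-gram (sg=1) with a small context window; vector_size is arbitrary here.
model = Word2Vec(client_histories, vector_size=16, window=2, sg=1,
                 min_count=1, epochs=50, seed=0)

# One simple client embedding: the mean of its transaction-category vectors.
client_vec = np.mean([model.wv[c] for c in client_histories[0]], axis=0)
print(client_vec.shape)  # (16,)
```

Re-running the pooling step on successive slices of a client's history is one way to "track their evolution over time as purchases are made", as the citing paper describes.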
“…From the very first day following their release, approaches such as word2vec and GloVe [20,21,22,23], which yield dense representations of state sequences, have demonstrated the value of pre-trained representations of discrete sequential data for downstream tasks [24,25]. Since then, such representations have found application not only in language modeling [26,27,28,29] but also in biology [30,31], graph analysis [32,33], and even banking [34]. Similar approaches have received overwhelming attention and have become part of the respective state-of-the-art systems.…”
Section: Introduction (mentioning)
confidence: 99%