2017 International Conference on Computational Science and Computational Intelligence (CSCI)
DOI: 10.1109/csci.2017.36

An Improved Bank Credit Scoring Model: A Naïve Bayesian Approach

Cited by 18 publications (8 citation statements) · References 19 publications
“…Managing the existing knowledge flow in tertiary institutions is essential. According to Kayıkçı and Ozan [1], knowledge is a powerful tool for organisational competition and therefore becomes significant to every industry, including banking, education and governmental sectors [2][3][4][5]. Knowledge generated should be properly managed to ensure its future availability.…”
Section: V1 (mentioning)
confidence: 99%
“…It uses the raw text document for training purposes, and the classifier uses the vectorized training data supplied by the vectorizer [17]. Naive Bayes classification applies the Bayes formula with presumed independence among predictors, using a set of training data to estimate the likelihoods and prior probabilities needed to compute the posterior probability for classification [18][19]. In the context of document classification, the probability of a particular document being annotated to a particular category, given that the document contains certain words, is equal to the probability of finding those particular words in that category, times the probability that any document is annotated to that category, divided by the probability of finding those words in any document [20][21], as shown in equation (1):…”
Section: Naïve Bayes Vectorization (mentioning)
confidence: 99%
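The rule described in this excerpt is the standard Bayes decomposition for text classification, P(category | words) ∝ P(words | category) · P(category) / P(words). The following minimal Python sketch illustrates that setup with scikit-learn's CountVectorizer and MultinomialNB; it is not the cited paper's implementation, and the training documents and labels are invented for illustration only.

```python
# Minimal sketch of Naive Bayes document classification:
# a vectorizer turns raw text into word-count features, and MultinomialNB
# applies Bayes' rule P(category | words) ∝ P(words | category) * P(category),
# estimating priors and per-word likelihoods from the training data.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Hypothetical training documents and category labels (illustrative only).
train_docs = [
    "loan approved with good credit history",
    "customer defaulted on outstanding loan",
    "repayment on time, low risk borrower",
    "missed payments and high outstanding debt",
]
train_labels = ["good", "bad", "good", "bad"]

# The vectorizer supplies the count features; the classifier estimates
# P(category) and P(word | category) needed for the posterior.
model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(train_docs, train_labels)

# Posterior probabilities and predicted category for a new document.
new_doc = ["borrower with good repayment history"]
print(model.predict_proba(new_doc))
print(model.predict(new_doc))
```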
“…The decision tree algorithm has also shown good accuracy in research on customer loyalty [3]. The use of a decision tree in research on improving bank credit [4] has likewise been carried out, reaching an accuracy of 82%. Research with logistic regression on a credit card risk dataset [5] showed that its accuracy reached 74%, and 70% with the Decision Tree algorithm.…”
Section: Introduction (unclassified)