2011
DOI: 10.1504/ijeb.2011.042542

Information asset analysis: credit scoring and credit suggestion

Abstract: Risk assessment is important for financial institutions, especially in loan applications. Some have already implemented their own credit-scoring mechanisms to evaluate their clients' risk and make decisions based on this indicator. In fact, the data gathered by financial institutions is a valuable source of information to create information assets, from which credit-scoring mechanisms can be developed. The purpose of this paper is to create, from information assets, a decision mechanism that is able to evaluat…
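To make the abstract's idea concrete, below is a minimal sketch of what a credit-scoring mechanism trained on historical loan data might look like. The feature names, synthetic data, and the choice of logistic regression are illustrative assumptions; the paper's actual model and attributes are not specified in this excerpt.

```python
# Minimal sketch of a credit-scoring mechanism built from historical loan
# data. All feature names and the synthetic data are hypothetical; the
# paper's actual model and attributes are not given in this excerpt.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical applicant features: income, debt ratio, years of credit history.
X = rng.normal(size=(500, 3))
# Synthetic labels: 1 = defaulted, 0 = repaid (toy generating process).
y = (X @ np.array([-1.0, 2.0, -0.5]) + rng.normal(scale=0.5, size=500) > 0).astype(int)

model = LogisticRegression().fit(X, y)

# The credit score is the estimated probability of repayment (class 0)
# for a new applicant; the institution thresholds it to approve or reject.
applicant = np.array([[0.2, -0.1, 1.5]])
score = model.predict_proba(applicant)[0, 0]
print(f"credit score (P(repay)): {score:.2f}")
```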

Cited by 3 publications (2 citation statements)
References 13 publications (14 reference statements)

“…This is a generic idea which may have different implementations depending on the algorithm being studied. In the literature this approach can be found in linear classification algorithms [11], where a linear Machine Learning algorithm is exploited to find how changes in coefficients or inputs change the final decision, as well as in black box models such as multilayer perceptrons [12]. Indeed, explanations are often more valuable when it comes to black box models, as there is a clear trade-off between interpretability and accuracy [13]: models that are generally more accurate, such as Deep Learning, are usually also harder to explain.…”
Section: Explainability in Machine Learning
confidence: 99%
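As a concrete illustration of the coefficient-based explanation this statement describes for linear classifiers [11], the sketch below reads per-feature contributions off a trained logistic regression: each coefficient-times-input term shows how that feature pushed the decision. The feature names and toy data are hypothetical assumptions.

```python
# Sketch of coefficient-based explanation for a linear classifier, in the
# spirit of the linear-model approach cited as [11]. Feature names and data
# are hypothetical; any trained scikit-learn linear model would work here.
import numpy as np
from sklearn.linear_model import LogisticRegression

feature_names = ["income", "debt_ratio", "history_years"]  # hypothetical
rng = np.random.default_rng(1)
X = rng.normal(size=(300, 3))
y = (X[:, 1] - X[:, 0] > 0).astype(int)

clf = LogisticRegression().fit(X, y)

x = X[0]
# Each feature's contribution to the decision function w.x + b: a positive
# term pushes toward class 1, a negative one toward class 0. Changing an
# input (or a coefficient) shifts the decision by exactly this amount.
contributions = clf.coef_[0] * x
for name, c in zip(feature_names, contributions):
    print(f"{name:>14}: {c:+.3f}")
print(f"{'intercept':>14}: {clf.intercept_[0]:+.3f}")
```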
“…Black box models, such as multilayer perceptrons, can also embed this approach. In [9], a genetic algorithm is used to search an output domain to provide suggestions for credit risk assessment, which can be perceived as an approach to interpret and explain a neural network decision process. This approach is similar to a technique known as LIME: Local Interpretable Model-Agnostic Explanations [10], which develops an approximation of the model by testing what happens when certain aspects within the input of the model are changed.…”
Section: Approaches to Enhance Explainability and Interpretability
confidence: 99%
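The perturbation idea behind LIME [10], which this statement compares to the genetic-algorithm search of [9], can be sketched as follows: perturb the instance, query the black-box model, and fit a proximity-weighted linear surrogate whose coefficients serve as a local explanation. The MLP stand-in, toy data, and sampling parameters are illustrative assumptions, not the cited works' implementations.

```python
# Sketch of a LIME-style local surrogate: perturb an input, query the
# black-box model, and fit a simple weighted linear model around that point.
# The black-box is a stand-in MLP on toy data; [9] uses a genetic algorithm
# rather than the random sampling shown here.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.linear_model import Ridge

rng = np.random.default_rng(2)
X = rng.normal(size=(400, 3))
y = (np.sin(X[:, 0]) + X[:, 1] ** 2 > 1).astype(int)
black_box = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000).fit(X, y)

x0 = X[0]
# Sample perturbations around the instance to be explained.
Z = x0 + rng.normal(scale=0.3, size=(200, 3))
p = black_box.predict_proba(Z)[:, 1]
# Weight samples by proximity to x0 (Gaussian kernel), so the surrogate
# approximates the black box only in the neighbourhood of this instance.
w = np.exp(-np.sum((Z - x0) ** 2, axis=1) / 0.5)

surrogate = Ridge(alpha=1.0).fit(Z, p, sample_weight=w)
print("local feature effects:", surrogate.coef_)
```

The surrogate's coefficients play the same role as the suggestions in [9]: they indicate which input changes would most move the black-box decision near this applicant.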