2021
DOI: 10.1155/2021/7194728

Solving Misclassification of the Credit Card Imbalance Problem Using Near Miss

Abstract: In ordinary credit card datasets, there are far fewer fraudulent transactions than ordinary transactions. In dealing with the credit card imbalance problem, the ideal solution must have low bias and low variance. The paper aims to provide an in-depth experimental investigation of the effect of using a hybrid data-point approach to resolve the class misclassification problem in imbalanced credit card datasets. The goal of the research was to use a novel technique to manage unbalanced datasets to improve the eff…

Cited by 33 publications (19 citation statements). References 49 publications.
“…As there is a huge difference in ratio of positive to negative classes, state of the art techniques were implemented to address class imbalance. Two strategies namely Synthetic Minority Over-sampling Technique (SMOTE) (44) and Near-Miss (NM) (45, 46) were used to balance the dataset. Packages in python are available for implementation of both techniques (47).…”
Section: Methodsmentioning
confidence: 99%
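The citing papers note that ready-made Python packages implement both balancing techniques (the imbalanced-learn library, for instance, provides `SMOTE` and `NearMiss` classes). As a self-contained illustration of the Near-Miss idea behind the paper's title, the sketch below implements NearMiss version 1 in plain Python: keep only the majority-class points whose average distance to their k nearest minority-class points is smallest, until the classes are balanced. The function name and toy data are invented for the example.

```python
import math

def near_miss_v1(majority, minority, k=3):
    """NearMiss version 1 sketch: undersample the majority class by
    keeping the points whose mean distance to their k nearest
    minority neighbours is smallest, until classes are balanced."""
    scored = []
    for m in majority:
        # Distances from this majority point to every minority point.
        d = sorted(math.dist(m, x) for x in minority)
        scored.append((sum(d[:k]) / min(k, len(d)), m))
    # Smallest average distance first; keep as many as there are minority points.
    scored.sort(key=lambda t: t[0])
    return [m for _, m in scored[:len(minority)]]

# Toy example: 6 "legitimate" points vs 2 "fraud" points.
majority = [(0, 0), (1, 0), (5, 5), (6, 5), (9, 9), (10, 9)]
minority = [(5, 4), (6, 4)]
kept = near_miss_v1(majority, minority, k=2)
print(kept)  # the two majority points closest to the fraud cluster
```

Selecting the majority points nearest the minority class concentrates the retained majority examples along the decision boundary, which is exactly where a classifier needs them.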
“…Accuracy measures the proportion of correctly classified instances out of the total instances and provides a fundamental indicator of overall model performance. However, when dealing with imbalanced data, where the number of legitimate transactions far outweighs fraudulent ones, accuracy alone may not be the most informative metric (Mqadi et al, 2021). In such cases, precision, recall, and F1-score become more relevant as they take into account false positives and false negatives, which are critical in fraud detection.…”
Section: Comparative Analysis Of Machine Learning Modelsmentioning
confidence: 99%
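The point that accuracy misleads on imbalanced data can be made concrete with a toy calculation (the data below is invented): a classifier that labels every transaction as legitimate scores high accuracy while catching no fraud, which recall and F1 immediately expose.

```python
def metrics(y_true, y_pred):
    """Confusion-matrix metrics; the positive class (1) is 'fraud'."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    accuracy = (tp + tn) / len(y_true)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return accuracy, precision, recall, f1

# 98 legitimate (0) and 2 fraudulent (1) transactions; the classifier
# predicts "legitimate" for everything.
y_true = [0] * 98 + [1] * 2
y_pred = [0] * 100
acc, prec, rec, f1 = metrics(y_true, y_pred)
print(acc)  # 0.98 -- looks excellent despite catching zero frauds
print(rec)  # 0.0  -- recall exposes the failure
```

This is why the citing work above prefers precision, recall, and F1-score over raw accuracy for fraud detection.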
“…As there is a huge difference in ratio of positive to negative classes, state of the art techniques were implemented to address class imbalance. Two strategies namely Synthetic Minority Over-sampling Technique (SMOTE) 39 and Near-Miss (NM) 40,41 were used to balance the dataset. Packages in python are available for implementation of both techniques 42 .…”
Section: Handling Class Imbalancementioning
confidence: 99%
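For the other balancing strategy the statement mentions, SMOTE, a minimal plain-Python sketch is given below: synthetic minority samples are generated by interpolating between a minority point and one of its k nearest minority neighbours. The toy data and function name are illustrative only; in practice the citing papers rely on the ready-made Python packages.

```python
import math
import random

def smote(minority, n_new, k=2, seed=0):
    """SMOTE sketch: create synthetic minority points by interpolating
    between a minority sample and one of its k nearest minority
    neighbours, at a random fraction of the distance between them."""
    rng = random.Random(seed)
    synthetic = []
    for _ in range(n_new):
        base = rng.choice(minority)
        # k nearest minority neighbours of the chosen base point.
        neighbours = sorted((p for p in minority if p != base),
                            key=lambda p: math.dist(base, p))[:k]
        nb = rng.choice(neighbours)
        gap = rng.random()  # interpolation fraction in [0, 1)
        synthetic.append(tuple(b + gap * (n - b)
                               for b, n in zip(base, nb)))
    return synthetic

# Toy minority ("fraud") cluster; generate 4 synthetic samples.
minority = [(5.0, 4.0), (6.0, 4.0), (5.5, 5.0)]
new_points = smote(minority, n_new=4, k=2)
print(len(new_points))  # 4 synthetic fraud-like samples
```

Because each synthetic point lies on a segment between two real minority points, SMOTE densifies the minority region rather than merely duplicating samples, which is what distinguishes it from naive random oversampling.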