The objective of this study is to offer a practical computational approach for handling classification tasks on large datasets. We show that using the built-in class-weighting parameter of standard classification libraries to balance classes can improve the accuracy and other metrics of a classification task. We employ logistic regression, support vector machines, decision trees, and random forests. We run each classification model with the parameter class_weight='balanced' and use stratified train/test splitting to ensure that relative class frequencies are approximately preserved in the train and test subsets. We apply our methods to medical datasets, where class imbalance is a frequent problem. Our results show that the proposed algorithms improve the accuracy and classification metrics on the given medical datasets, offering an effective and easy-to-apply alternative for improving the predictive ability of the presented classification models. The set-up is easily reproducible: any classification model can be used to model imbalanced classes, with the key tuning lying in the stratified train/test split and the class_weight='balanced' parameter. By combining these settings, better classification performance can be obtained in a quick and simple manner. Our algorithms can be readily replicated, whether in biostatistical laboratories or by medical companies, to examine various medical datasets and determine which model best fits the data. Because the approach is simple to understand, medical researchers can swiftly review the results and determine the best course of action.
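The set-up described above can be sketched with scikit-learn, which provides both the class_weight='balanced' parameter and the stratify option for train/test splitting. The synthetic 9:1 imbalanced dataset below is a stand-in for a medical dataset; the sample sizes, class ratio, and choice of logistic regression are illustrative assumptions, and any of the other named classifiers could be substituted.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import balanced_accuracy_score

# Synthetic imbalanced dataset (roughly 9:1 class ratio) standing in
# for a medical dataset with a rare positive class.
X, y = make_classification(n_samples=2000, n_features=20,
                           weights=[0.9, 0.1], random_state=0)

# stratify=y preserves the relative class frequencies in both the
# train and test subsets, as the abstract describes.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, stratify=y, random_state=0)

# class_weight='balanced' reweights each class inversely to its
# frequency, so the minority class is not dominated during fitting.
clf = LogisticRegression(class_weight='balanced', max_iter=1000)
clf.fit(X_train, y_train)

print(balanced_accuracy_score(y_test, clf.predict(X_test)))
```

Swapping in SVC, DecisionTreeClassifier, or RandomForestClassifier is a one-line change, since each accepts the same class_weight parameter.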
Principal component analysis (PCA) is often used as a dimensionality reduction technique: a small number of principal components is selected for use in a classification or regression model to boost accuracy. A central issue in PCA is how to select the number of principal components. Existing algorithms often produce contradictory answers, and the researcher must manually select the final number of components to use. In this research we propose a novel algorithm that selects the number of principal components automatically, based on a combination of ANOVA ranking of the principal components, the bootstrap, and classification models. Unlike the classical approach, the proposed algorithm improves the accuracy of logistic regression and selects the best combination of principal components, which need not be the first components in order. The ANOVA-bootstrapped PCA classification we propose is novel in that it automatically selects the number of principal components that maximises the accuracy of the classification model.
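One way such a procedure could look is sketched below: components are ranked by their ANOVA F-score against the class labels, and the number of top-ranked components is chosen by out-of-bag accuracy over bootstrap resamples. The dataset, the number of bootstrap replicates, and the use of f_classif for the ANOVA ranking are illustrative assumptions, not the paper's exact specification.

```python
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.feature_selection import f_classif
from sklearn.linear_model import LogisticRegression

X, y = load_breast_cancer(return_X_y=True)
Z = PCA(n_components=10).fit_transform(StandardScaler().fit_transform(X))

# ANOVA ranking: score each principal component against the labels,
# then order components by decreasing F-score (not by variance).
f_scores, _ = f_classif(Z, y)
order = np.argsort(f_scores)[::-1]

rng = np.random.default_rng(0)
n = len(y)
best_k, best_acc = 1, 0.0
for k in range(1, Z.shape[1] + 1):
    cols = order[:k]          # top-k ANOVA-ranked components
    accs = []
    for _ in range(20):       # bootstrap resamples
        idx = rng.choice(n, size=n, replace=True)
        oob = np.setdiff1d(np.arange(n), idx)  # out-of-bag rows
        clf = LogisticRegression(max_iter=2000)
        clf.fit(Z[idx][:, cols], y[idx])
        accs.append(clf.score(Z[oob][:, cols], y[oob]))
    if np.mean(accs) > best_acc:
        best_acc, best_k = float(np.mean(accs)), k

print(best_k, best_acc)
```

Because the components are ANOVA-ranked before the search, the selected subset need not be the first k components by explained variance, matching the abstract's point that the chosen components may not be ordered.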