Background: Prediction and classification algorithms are widely used in clinical research to identify patients susceptible to conditions such as diabetes, colon cancer, and Alzheimer's disease. Developing accurate prediction and classification methods has implications for personalized medicine. Building a strong predictive model requires selecting the features most strongly associated with the response of interest. These features can include biological and demographic characteristics, such as genomic biomarkers and health history. Variable selection becomes challenging when the number of candidate predictors is large. Bayesian shrinkage models have emerged as popular and flexible methods for variable selection in regression settings. This article discusses variable selection with three shrinkage priors and illustrates their application to clinical data sets, including the Pima Indians Diabetes, Colon cancer, ADNI, and OASIS Alzheimer's disease data sets.

Methods: We present a unified Bayesian hierarchical framework that implements and compares shrinkage priors in binary and multinomial logistic regression models. The key feature is the representation of the likelihood through Pólya-Gamma data augmentation, which admits a natural integration with a family of shrinkage priors. We focus specifically on the Horseshoe, Dirichlet-Laplace, and double-Pareto priors. Extensive simulation studies assess performance under different data dimensions and parameter settings. Accuracy, AUC, Brier score, L1 error, cross-entropy, and ROC surface plots serve as evaluation criteria for comparing the priors with frequentist methods such as Lasso, Elastic Net, and Ridge regression.

Results: All three priors deliver robust predictive performance across the evaluation metrics, irrespective of the categorical response model chosen.
In simulation studies, the mean prediction accuracy reached 91% (95% CI: 90.7%, 91.2%) for logistic regression and 74% (95% CI: 73.8%, 74.1%) for multinomial logistic regression. The models identify significant variables for disease-risk prediction and are computationally efficient.

Conclusions: Owing to their strong shrinkage properties and their applicability to a broad range of classification problems, the models are robust enough to perform both variable selection and prediction of future outcomes.
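To sketch the augmentation named in the Methods: the Pólya-Gamma identity (Polson, Scott, and Windle, 2013) rewrites the logistic likelihood as a Gaussian scale mixture, so that, conditional on the latent variables, the regression coefficients have a conjugate normal update under any Gaussian scale-mixture shrinkage prior. The Horseshoe hierarchy below is the standard textbook form, shown for illustration rather than as the paper's exact specification.

```latex
% Pólya-Gamma identity: for \omega \sim \mathrm{PG}(b, 0) and \kappa = a - b/2,
\frac{(e^{\psi})^{a}}{(1 + e^{\psi})^{b}}
  = 2^{-b}\, e^{\kappa\psi}\, \mathbb{E}_{\omega}\!\left[e^{-\omega\psi^{2}/2}\right].

% With \psi_i = x_i^{\top}\beta, the likelihood conditional on \omega_i is
% Gaussian in \beta, hence conjugate with Gaussian scale-mixture priors
% such as the Horseshoe:
\beta_j \mid \lambda_j, \tau \sim \mathcal{N}\!\left(0,\; \lambda_j^{2}\tau^{2}\right),
\qquad \lambda_j \sim \mathrm{C}^{+}(0, 1),
\qquad \tau \sim \mathrm{C}^{+}(0, 1).
```

Conditioning on the ω's turns each Gibbs step for β into a draw from a multivariate normal, which is what makes the augmentation integrate naturally with the three priors compared here.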
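The evaluation criteria listed in the Methods are all standard and easy to state precisely. The sketch below (plain NumPy; function names are illustrative, not from the paper's code) computes accuracy, AUC, Brier score, and cross-entropy for a binary classifier, plus the L1 error of an estimated coefficient vector against the truth.

```python
import numpy as np

def classification_metrics(y_true, p_hat):
    """Binary-classifier metrics: accuracy, AUC, Brier score, cross-entropy.
    y_true holds 0/1 labels; p_hat holds predicted probabilities of class 1."""
    y_true = np.asarray(y_true, dtype=float)
    p_hat = np.asarray(p_hat, dtype=float)

    # Accuracy at the usual 0.5 threshold.
    acc = np.mean((p_hat >= 0.5) == y_true)

    # AUC via the Mann-Whitney statistic: the fraction of
    # (positive, negative) pairs ranked correctly, ties counting 1/2.
    pos, neg = p_hat[y_true == 1], p_hat[y_true == 0]
    gt = (pos[:, None] > neg[None, :]).mean()
    eq = (pos[:, None] == neg[None, :]).mean()
    auc = gt + 0.5 * eq

    # Brier score: mean squared error of the probability forecast.
    brier = np.mean((p_hat - y_true) ** 2)

    # Cross-entropy (negative mean log-likelihood), clipped for stability.
    p = np.clip(p_hat, 1e-12, 1 - 1e-12)
    xent = -np.mean(y_true * np.log(p) + (1 - y_true) * np.log(1 - p))

    return {"accuracy": acc, "auc": auc, "brier": brier, "cross_entropy": xent}

def l1_error(beta_hat, beta_true):
    """L1 estimation error of the coefficients: sum_j |beta_hat_j - beta_j|."""
    return float(np.sum(np.abs(np.asarray(beta_hat) - np.asarray(beta_true))))
```

For example, `classification_metrics([0, 0, 1, 1], [0.2, 0.4, 0.6, 0.9])` gives accuracy 1.0, AUC 1.0, and a Brier score of 0.0925; the multinomial case generalizes the cross-entropy term over class probabilities.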