Advancements in computing and storage technologies have significantly contributed to the adoption of deep learning (DL)-based models among machine learning (ML) experts. Although a generic model can be used in the search for a near-optimal solution in any problem domain, what makes these DL models context-sensitive is the combination of the training data and the hyperparameters. Because DL models lack inherent explainability, hyperparameter optimization (HPO), or tuning, specific to each model is a combination of art, science, and experience. In this article, we explore existing methods for identifying the optimal set of hyperparameter values for DL models, along with techniques for applying those methods in real-life situations. The article also includes a detailed comparative study of various state-of-the-art HPO techniques using the Keras Tuner toolkit and highlights observations on how model performance can be improved by applying those techniques.
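As a minimal illustration of the core idea behind HPO (not the Keras Tuner API itself), the sketch below performs an exhaustive grid search over a hypothetical hyperparameter space; the `objective` function is a stand-in for a real validation metric, and all names and values are illustrative assumptions.

```python
import itertools

def objective(lr, units):
    # Hypothetical proxy for validation accuracy; in practice this would
    # train and evaluate a DL model. Peaks at lr=0.01, units=128.
    return 1.0 - abs(lr - 0.01) * 10 - abs(units - 128) / 1000

# Illustrative search space: a learning rate and a layer width.
search_space = {
    "lr": [0.1, 0.01, 0.001],
    "units": [32, 64, 128, 256],
}

def grid_search(space):
    """Evaluate every hyperparameter combination and keep the best one."""
    keys = list(space)
    best_score, best_params = float("-inf"), None
    for values in itertools.product(*space.values()):
        params = dict(zip(keys, values))
        score = objective(**params)
        if score > best_score:
            best_score, best_params = score, params
    return best_params, best_score

best_params, best_score = grid_search(search_space)
```

Tuning toolkits such as Keras Tuner automate this loop with smarter strategies (random search, Bayesian optimization, Hyperband) that avoid evaluating the full grid.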