Convolutional Neural Networks (CNNs) are widely used in computer vision, notably for handwritten digit recognition. Despite their high accuracy, CNNs are computationally expensive to train and prone to converging to local optima, motivating improved optimization strategies. This research introduces a hyperparameter-tuning approach for CNN models targeting English and Devanagari handwritten digit recognition. The method combines a Hybrid Evolutionary Algorithm (HEA) with a Variable-Length Genetic Algorithm (VLGA) and is evaluated on a dataset encompassing both English and Devanagari handwritten digits. By extending the conventional fixed-length GA to a variable-length encoding, the approach systematically and adaptively tunes seven key CNN hyperparameters: learning rate, optimizer, kernel size, filter count, activation function, layer count, and pooling strategy. Extensive experiments on benchmark datasets show that the proposed approach outperforms traditional optimization methods, demonstrating the performance gains available to CNNs assisted by genetic algorithms. Notably, a population size of 27 yields the best fitness values and average fitness scores. The HEA-VLGA model ultimately achieves an accuracy of approximately 99.38%. These findings affirm the efficacy of evolutionary techniques as a potent avenue for enhancing CNN training.
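To make the variable-length encoding concrete, the following is a minimal sketch of how such a chromosome might be represented: global genes (learning rate, optimizer) plus a layer list whose length itself evolves, so that mutation can grow or shrink the network. The specific search spaces, value ranges, and function names here are illustrative assumptions, not the paper's actual implementation.

```python
import random

# Hypothetical search spaces (illustrative only, not taken from the paper)
OPTIMIZERS = ["sgd", "adam", "rmsprop"]
ACTIVATIONS = ["relu", "tanh", "elu"]
POOLING = ["max", "avg"]
MIN_LAYERS, MAX_LAYERS = 1, 6

def random_layer():
    """One convolutional-layer gene: kernel size, filter count, activation, pooling."""
    return {
        "kernel_size": random.choice([3, 5, 7]),
        "filters": random.choice([16, 32, 64, 128]),
        "activation": random.choice(ACTIVATIONS),
        "pooling": random.choice(POOLING),
    }

def random_chromosome():
    """Variable-length chromosome: global genes plus a 1-6 element layer list."""
    n_layers = random.randint(MIN_LAYERS, MAX_LAYERS)
    return {
        "learning_rate": 10 ** random.uniform(-4, -1),
        "optimizer": random.choice(OPTIMIZERS),
        "layers": [random_layer() for _ in range(n_layers)],
    }

def mutate(chromo, p=0.2):
    """Mutation may also add or remove a layer gene, changing chromosome length."""
    child = {**chromo, "layers": [dict(layer) for layer in chromo["layers"]]}
    if random.random() < p:
        child["learning_rate"] = 10 ** random.uniform(-4, -1)
    if random.random() < p:
        child["optimizer"] = random.choice(OPTIMIZERS)
    if random.random() < p and len(child["layers"]) < MAX_LAYERS:
        child["layers"].append(random_layer())  # grow the network
    elif random.random() < p and len(child["layers"]) > MIN_LAYERS:
        child["layers"].pop(random.randrange(len(child["layers"])))  # shrink it
    return child
```

In a full GA loop, each chromosome would be decoded into a CNN, trained briefly, and scored on validation accuracy as its fitness; selection and crossover would then operate over populations of these variable-length individuals.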