The automated transcription of handwritten characters into legible digital text is a multi-faceted process with diverse applications. This paper proposes a novel approach to optical character recognition (OCR) for handwritten digits that, in certain components, surpasses current architectures in accuracy, adjustability, speed, and/or computational simplicity. The model adopts and enhances older, largely superseded algorithms across eight image pre-processing steps: normalization, grayscale conversion, thresholding/binarization, noise removal, skew correction, skeletonization/thinning, line separation, and character segmentation. The resulting pipeline is evaluated with a Convolutional Neural Network (CNN) trained and tested on the EMNIST Balanced dataset. By proposing contour-based feature extraction as an alternative to pixel-by-pixel iteration, the approach demonstrates its capacity to serve as a viable alternative to commonly used algorithms and computational techniques for textual image classification.
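To make a few of the named pre-processing steps concrete, the sketch below illustrates three of them in plain NumPy: grayscale conversion (luminosity weighting), thresholding/binarization (Otsu's method), and character segmentation (a vertical projection profile). This is a minimal illustration of standard techniques, not the paper's actual implementation; all function names and the synthetic test image are assumptions introduced here.

```python
import numpy as np

def to_grayscale(rgb):
    # Luminosity method: channel weights approximate perceived brightness.
    return rgb @ np.array([0.299, 0.587, 0.114])

def otsu_threshold(gray):
    # Otsu's method: pick the threshold maximizing between-class variance.
    hist, _ = np.histogram(gray, bins=256, range=(0, 256))
    total = gray.size
    sum_all = np.dot(np.arange(256), hist)
    best_t, best_var = 0, 0.0
    w0, sum0 = 0, 0.0
    for t in range(256):
        w0 += hist[t]           # weight of the background class
        if w0 == 0:
            continue
        w1 = total - w0         # weight of the foreground class
        if w1 == 0:
            break
        sum0 += t * hist[t]
        m0 = sum0 / w0          # background mean
        m1 = (sum_all - sum0) / w1  # foreground mean
        var = w0 * w1 * (m0 - m1) ** 2
        if var > best_var:
            best_var, best_t = var, t
    return best_t

def segment_characters(binary):
    # Vertical projection profile: ink-free columns separate characters.
    ink = binary.sum(axis=0)
    segments, start = [], None
    for x, v in enumerate(ink):
        if v > 0 and start is None:
            start = x
        elif v == 0 and start is not None:
            segments.append((start, x))
            start = None
    if start is not None:
        segments.append((start, len(ink)))
    return segments

# Synthetic example: two "characters" (filled blocks) with a blank gap.
img = np.zeros((10, 20), dtype=int)
img[2:8, 2:6] = 1
img[2:8, 12:17] = 1
print(segment_characters(img))  # → [(2, 6), (12, 17)]
```

Projection-profile segmentation is the simplest of the family; the paper's contour-based extraction would replace this per-column scan with contour tracing over connected components.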