For many intelligent applications, object recognition is a critical task, and developing an effective feature extraction method is one of its most difficult challenges. A variety of algorithms have been developed for this purpose, including self-organizing maps (SOMs), support vector machines (SVMs), principal component analysis (PCA), and modern deep learning techniques, particularly convolutional neural networks (CNNs). In CNNs, the two layers that contribute most to the network bottleneck are the convolution layer and the fully connected layer: the latter is memory-intensive, while the former is computationally expensive. Optimizing these two layers is therefore crucial for efficient CNN execution. The primary objective of this paper is a detailed discussion of two approaches for optimizing the CNN architecture. The first approach enhances CNNs by using the SOM topology space in the convolution layer and a k-nearest neighbors (KNN) classifier in place of the conventional fully connected layer. The second approach also employs the KNN classifier in place of the fully connected layer, but uses an improved SOM technique called "cyclic convolution SOMs" rather than convolution structures to speed up CNN processing. The efficiency of the proposed approaches has been evaluated on four widely used benchmark datasets: AHDBase for Arabic digits, MNIST for English digits, CMU-PIE for faces, and CIFAR-10 for objects. Compared with other approaches (e.g., standard CNN, CSOMs, LSTMs, SVM, SOMs, PCA, and cyclic SOM), the experiments on these datasets yielded the following accuracies: 97.7%, 98.2%, 98.51%, and 93.8% for the first approach, and 96.57%, 95.4%, 97%, and 89.23% for the second. Our results indicate that, when applied to a variety of datasets, the proposed methods offer promising outcomes with higher accuracy than existing methods.
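
To make the first approach concrete, the sketch below illustrates the general idea of replacing a CNN's fully connected classification head with a KNN classifier operating on convolutional features. This is a minimal illustration only: the layer sizes, the choice of k, and the random stand-in data are assumptions for demonstration, not the paper's actual architecture, SOM-based convolution topology, or experimental setup.

```python
# Minimal sketch: a convolutional feature extractor whose fully connected
# head is replaced by a k-nearest-neighbors classifier. All hyperparameters
# and data here are illustrative assumptions, not the paper's configuration.
import torch
import torch.nn as nn
from sklearn.neighbors import KNeighborsClassifier

class ConvFeatures(nn.Module):
    """Convolutional trunk only; no fully connected classification head."""
    def __init__(self):
        super().__init__()
        self.trunk = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Flatten(),  # 32 channels x 7 x 7 = 1568-dim feature vector
        )

    def forward(self, x):
        return self.trunk(x)

extractor = ConvFeatures().eval()

# Toy stand-ins for a 28x28 grayscale digit dataset such as MNIST.
train_x = torch.randn(100, 1, 28, 28)
train_y = torch.randint(0, 10, (100,))
test_x = torch.randn(5, 1, 28, 28)

with torch.no_grad():
    train_feats = extractor(train_x).numpy()
    test_feats = extractor(test_x).numpy()

# The KNN classifier stands in for the memory-intensive fully connected
# layer: classification is done by distance in the learned feature space.
knn = KNeighborsClassifier(n_neighbors=5)
knn.fit(train_feats, train_y.numpy())
print(knn.predict(test_feats))
```

In practice the convolutional trunk would be trained (and, in the paper's first approach, organized via the SOM topology space) before its features are handed to the KNN stage; the sketch omits training for brevity.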