Conversation among people is a rich form of interaction that also carries emotional information. Speech input has been the subject of numerous studies over the last decade and is now central to human-computer interaction, as well as to applications in healthcare, security, and entertainment. This research aims to evaluate whether the proposed framework can aid speech emotion recognition (SER) tasks and to determine whether Convolutional Neural Network (CNN) systems, applied as transfer learning models on spectrograms, are effective for SER. The authors present a new attention-based CNN framework and evaluate its efficacy against several well-known CNN architectures from earlier research. The effectiveness of the proposed system is assessed on the SAVEE dataset, an open-access emotional speech corpus, against established CNN models including VGG16, InceptionV3, ResNet50, InceptionResNetV2, and Xception. Stratified 10-fold cross-validation on SAVEE was used for all experiments. Among these CNN architectures, the proposed model achieved the highest accuracy (87.14%), followed by VGG16 (83.19%) and InceptionResNetV2 (82.22%). Compared with contemporary techniques, the test results show that the proposed approach delivers consistent and strong performance.
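The abstract does not specify the internals of the attention mechanism, so the following is only an illustrative sketch of one common design in attention-based SER models: frame-level CNN features extracted from a spectrogram are collapsed into a single utterance vector by softmax-weighted pooling. All shapes and the scoring vector here are hypothetical, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x, axis=-1):
    # Numerically stable softmax
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention_pool(features, w):
    """Collapse time frames into one utterance-level vector.

    features: (T, D) frame-level features, e.g. from a CNN over a spectrogram
    w:        (D,)  scoring vector (learned in a real model; random here)
    Returns the attention-weighted sum (D,) and the weights (T,).
    """
    scores = features @ w          # one relevance score per time frame, (T,)
    alpha = softmax(scores)        # attention weights, non-negative, sum to 1
    pooled = alpha @ features      # weighted sum over frames, (D,)
    return pooled, alpha

# Toy example: 50 time frames of 128-dimensional CNN features
feats = rng.normal(size=(50, 128))
w = rng.normal(size=128)
utterance_vec, alpha = attention_pool(feats, w)
```

In a full SER pipeline, `utterance_vec` would feed a softmax classifier over the emotion classes; the attention weights let the model emphasize emotionally salient frames rather than averaging all frames equally.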