The proliferation of information technology has stimulated growing scholarly interest in artificial intelligence, particularly algorithms such as deep learning, multilayer perceptrons, and convolutional neural networks. Facial expression analysis, in particular, has emerged as a popular research topic. However, classifying facial expressions is challenging: expressions of the same emotion vary widely, different emotions can appear similar, and the task is further complicated by the large number of facial features that must be considered. In this paper, a basic Convolutional Neural Network (CNN) model was first trained on the FER-2013 facial expression dataset and then optimized by adding an attention module that learns and emphasizes the key features of faces, thereby improving the model's classification accuracy. The attention-augmented model achieved an accuracy of 71.65% and an F1-score of 0.66, whereas the model without the attention module achieved only 69.86% accuracy and an F1-score of 0.65, an improvement of about 2 percentage points. The analysis demonstrated that the attention mechanism assigns greater weight to important features such as the eyes and mouth, which improves classification accuracy.
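The channel re-weighting idea behind such an attention module can be illustrated with a minimal squeeze-and-excitation style sketch in NumPy. This is an illustrative assumption about the module's form (the paper's exact attention design may differ), with randomly initialized weights standing in for learned parameters:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def channel_attention(feat, w1, w2):
    """Squeeze-and-excitation style channel attention (illustrative sketch).

    feat: (C, H, W) CNN feature map
    w1:   (C, C//r) bottleneck weights, r = reduction ratio
    w2:   (C//r, C) expansion weights
    """
    # Squeeze: global average pooling over spatial dimensions -> (C,)
    z = feat.mean(axis=(1, 2))
    # Excitation: bottleneck MLP with a sigmoid gate -> per-channel weights in (0, 1)
    s = sigmoid(np.maximum(z @ w1, 0.0) @ w2)
    # Re-weight each channel; channels encoding salient regions
    # (e.g. eyes, mouth) would receive larger learned weights
    return feat * s[:, None, None], s

rng = np.random.default_rng(0)
C, H, W, r = 8, 6, 6, 2
feat = rng.standard_normal((C, H, W))
w1 = rng.standard_normal((C, C // r)) * 0.1
w2 = rng.standard_normal((C // r, C)) * 0.1
out, weights = channel_attention(feat, w1, w2)
```

In a trained network, `w1` and `w2` would be learned end-to-end, so channels whose activations correlate with discriminative regions receive weights near 1 while less informative channels are suppressed.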