The human face is a potent biological medium for expressing emotion, and the ability to interpret facial expressions has been fundamental to human interaction since time immemorial. Recognizing emotions from facial images using machine learning is therefore an intriguing yet challenging problem. Over the past few years, advances in artificial intelligence have contributed significantly to this field by replicating aspects of human perception. This paper proposes a Facial Emotion Recognition Convolutional Neural Network (FERCNN) model that addresses the limitations, documented in the literature, of training directly on raw input images. A notable improvement in performance is observed when the input images are injected with noise prior to training and validation; Gaussian, Poisson, Speckle, and Salt & Pepper noise types are used in this noise-injection process. The proposed model yields superior results compared to well-established CNN architectures, including VGG16, VGG19, Xception, and ResNet50, and it also reduces training cost compared to models trained without noise injection at the input level. The FER2013 and JAFFE datasets, covering seven emotions (happy, angry, neutral, fear, disgust, sad, and surprise) and totaling 39,387 images, are used for training and testing. All experiments are conducted on the Kaggle cloud infrastructure. When Gaussian, Poisson, and Speckle noise are introduced at the input level, the proposed CNN model achieves evaluation accuracies of 92.17%, 95.07%, and 92.41%, respectively. In contrast, the highest accuracies achieved by existing models such as VGG16, VGG19, and ResNet50 are 45.97%, 63.97%, and 54.52%, respectively.
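
To make the noise-injection step concrete, the sketch below perturbs a grayscale image (values in [0, 1]) with each of the four noise types named above. The Gaussian/Speckle standard deviation (0.1), the Poisson scaling to 255 intensity levels, and the 5% salt-and-pepper amount are illustrative assumptions chosen here for demonstration, not parameters reported by the paper.

```python
import numpy as np

def add_noise(image, kind="gaussian", rng=None):
    """Return a noisy copy of `image`, a float array scaled to [0, 1].

    Noise parameters below are illustrative assumptions, not values
    taken from the paper.
    """
    rng = np.random.default_rng(rng)
    img = np.asarray(image, dtype=np.float64)

    if kind == "gaussian":
        # Additive zero-mean Gaussian noise.
        noisy = img + rng.normal(0.0, 0.1, img.shape)
    elif kind == "poisson":
        # Treat intensities as photon counts, sample, and rescale.
        levels = 255.0
        noisy = rng.poisson(img * levels) / levels
    elif kind == "speckle":
        # Multiplicative noise: image + image * n.
        noisy = img + img * rng.normal(0.0, 0.1, img.shape)
    elif kind == "salt_pepper":
        # Flip ~2.5% of pixels to black and ~2.5% to white.
        noisy = img.copy()
        mask = rng.random(img.shape)
        noisy[mask < 0.025] = 0.0   # pepper
        noisy[mask > 0.975] = 1.0   # salt
    else:
        raise ValueError(f"unknown noise kind: {kind!r}")

    return np.clip(noisy, 0.0, 1.0)
```

In a training pipeline, such a function would be applied to each image before it is fed to the network, e.g. `add_noise(x, "poisson")` for every sample in a batch.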