Research on real-time Facial Expression Recognition (FER) and human emotion prediction has been ongoing. Seamless coexistence between humans and machines can only be realized if a machine is able to comprehend, display, and respond to human emotional states. An Artificial Intelligence (AI) that factors emotions into its judgements, rather than relying on algorithms and logic alone, comes far closer to natural human interaction. Although several attempts have been made to date, the field is still considered to be in an early stage of development. Hence, this paper performs FER prediction with the help of a novel intelligent deep learning technique. Initially, the data are gathered from two standard benchmark sources, the FER-2013 and EMOTIC datasets. Next, the gathered images are pre-processed through face detection, rotation rectification, and the Local Binary Pattern (LBP). Features are then extracted from the pre-processed images using the Centralized Binary Pattern (CBP) method. The extracted features are passed to the final prediction phase, performed by the novel Enhanced Gated Recurrent Unit (EGRU), in which the GRU parameters are tuned by a nature-inspired optimization algorithm, the Kookaburra Optimization Algorithm (KOA), with error minimization as the major objective function. The resulting EGRU-KOA model predicts the final outcome over various facial expressions such as sadness, surprise, disgust, contempt, happiness, and fear. The effectiveness of the proposed model is demonstrated by comparing it with various conventional models across distinct analyses. In terms of accuracy, the proposed EGRU-KOA is 2.78%, 5.60%, 1.63%, and 4.28% better than DBN+QPSO, GDP, CNN, and WKELM, respectively. Similarly, with respect to MSE, the proposed EGRU-KOA is 84.82%, 87.41%, 88.51%, and 86.82% better than DBN+QPSO, GDP, CNN, and WKELM, respectively.
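To make the described pipeline concrete, the following is a minimal Python sketch of the texture-feature and recurrent-classification stages, under stated assumptions: the CBP descriptor and the KOA hyper-parameter search are not standard library routines, so scikit-image's uniform LBP stands in for the texture features and a plain PyTorch GRU stands in for the tuned EGRU; the helper names (lbp_features, GRUClassifier) are hypothetical and not taken from the paper.

```python
# Sketch only: LBP-style pre-processing + GRU classification over the six
# expression classes named in the abstract. CBP and KOA tuning are omitted
# because they are not off-the-shelf library routines.
import numpy as np
import torch
import torch.nn as nn
from skimage.feature import local_binary_pattern

EMOTIONS = ["sad", "surprise", "disgust", "contempt", "happy", "fear"]

def lbp_features(gray_face, points=8, radius=1, grid=(4, 4)):
    """Grid of uniform-LBP histograms computed from a grayscale face crop."""
    lbp = local_binary_pattern(gray_face, points, radius, method="uniform")
    bins = points + 2                      # number of uniform patterns
    feats = []
    for row in np.array_split(lbp, grid[0], axis=0):
        for cell in np.array_split(row, grid[1], axis=1):
            hist, _ = np.histogram(cell, bins=bins, range=(0, bins), density=True)
            feats.append(hist)
    return np.stack(feats).astype(np.float32)   # shape: (grid cells, bins)

class GRUClassifier(nn.Module):
    """GRU over the sequence of per-cell histograms, with a linear class head."""
    def __init__(self, feat_dim, hidden=64, n_classes=len(EMOTIONS)):
        super().__init__()
        self.gru = nn.GRU(feat_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, x):                  # x: (batch, cells, feat_dim)
        _, h = self.gru(x)
        return self.head(h[-1])            # class logits

if __name__ == "__main__":
    face = (np.random.rand(48, 48) * 255).astype(np.uint8)   # stand-in for a FER-2013 crop
    feats = torch.from_numpy(lbp_features(face)).unsqueeze(0)
    model = GRUClassifier(feat_dim=feats.shape[-1])
    probs = torch.softmax(model(feats), dim=-1)
    print(dict(zip(EMOTIONS, probs.squeeze().tolist())))
```

In this sketch the GRU parameters would, per the paper's description, be selected by KOA with error minimization as the objective; any generic hyper-parameter search could occupy that slot in the code above.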