This study emphasizes the importance of facial expression recognition in identifying neurological problems in individuals with limited verbal communication abilities. Current evaluation models are time-consuming and expensive, hindering medical professionals. To address these limitations, we present an improved artificial neural network based on Lyapunov stability theory (ANN-LST). This combination resolves convergence issues; however, high-dimensional data still leads to overfitting, which degrades prediction and analysis. Our approach therefore employs principal component analysis (PCA) for feature extraction and dimensionality reduction to mitigate overfitting. The proposed model was evaluated on the Japanese Female Facial Expression (JAFFE) database and our own Ahmad Ilham Simple Face Database (AIsFD), with accuracy (ACC) as the evaluation metric. The results demonstrate higher recognition rates and faster training owing to the adaptive learning rate parameters and the extraction of relevant feature information. The proposed system achieved a 13% higher success rate than face recognition systems that use raw images alone. Overall, this model represents a significant advancement and offers promising applications for facial expression recognition in patients with neurological disorders.
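The PCA step mentioned above can be sketched as follows. This is a minimal, illustrative implementation using SVD; the feature dimensions, sample counts, and random data are assumptions for demonstration only and are not taken from the study.

```python
import numpy as np

# Illustrative stand-in for flattened face-image feature vectors:
# 200 samples with 64 raw features each (dimensions are hypothetical).
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 64))

def pca_reduce(X, k):
    """Project X onto its top-k principal components via SVD."""
    X_centered = X - X.mean(axis=0)
    # Rows of Vt are the principal directions, ordered by explained variance.
    U, S, Vt = np.linalg.svd(X_centered, full_matrices=False)
    return X_centered @ Vt[:k].T

# Reduce to 10 components before passing features to a classifier.
Z = pca_reduce(X, k=10)
print(Z.shape)  # (200, 10)
```

In a pipeline like the one the abstract describes, the reduced features `Z` (rather than raw pixel values) would be fed to the ANN-LST classifier, which is the mechanism by which PCA curbs overfitting on high-dimensional inputs.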