Adversarial attacks pose a formidable challenge to the integrity of Convolutional Neural Network-Long Short-Term Memory (CNN-LSTM) models, particularly in power quality disturbance (PQD) classification, where subtle input perturbations can significantly degrade the performance of models critical to power system stability and efficiency. This study introduces the concept of adversarial attacks on CNN-LSTM models and emphasizes the critical need for robust defenses.

We propose Input Adversarial Training (IAT), a novel defense strategy that trains CNN-LSTM models on a blend of clean and adversarially perturbed inputs to improve their robustness. The effectiveness of IAT is assessed through comparisons with established defense mechanisms, using accuracy, precision, recall, and F1-score on both unperturbed and adversarially modified datasets.

The results are compelling: models defended with IAT exhibit marked improvements in robustness against adversarial attacks. Specifically, on adversarially perturbed data, IAT-enhanced models achieve 85% accuracy, 86% precision, 85% recall, and an 85.5% F1-score. These figures substantially surpass those of models using standard adversarial training (75% accuracy) and defensive distillation (70% accuracy), demonstrating IAT's superior capacity to maintain accuracy under adversarial conditions.

In conclusion, IAT stands out as an effective defense mechanism, significantly bolstering the resilience of CNN-LSTM models against adversarial perturbations.
This research not only sheds light on the vulnerabilities of these models to adversarial attacks but also establishes IAT as a benchmark in defense strategy development, promising enhanced security and reliability for PQD classification and related applications.
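To make the IAT idea concrete, the following is a minimal illustrative sketch, not the paper's actual method or model: it uses a toy logistic-regression classifier and FGSM-style sign-of-gradient perturbations in place of the paper's CNN-LSTM and its (unspecified) attack, and mixes clean and perturbed inputs 50/50 during training. All names, sizes, and hyperparameters here are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy binary "disturbance" classifier on synthetic signals (illustrative only;
# the paper's CNN-LSTM architecture and data are not reproduced here).
n, d = 200, 16
X = rng.normal(size=(n, d))
w_true = rng.normal(size=d)
y = (X @ w_true > 0).astype(float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm(X, y, w, eps):
    """FGSM-style attack: step in the sign of the input gradient of the loss."""
    p = sigmoid(X @ w)
    grad_x = (p - y)[:, None] * w[None, :]  # d(BCE)/dx for logistic regression
    return X + eps * np.sign(grad_x)

w = np.zeros(d)
lr, eps = 0.1, 0.1
for _ in range(300):
    # Input Adversarial Training: blend clean and perturbed inputs each step.
    X_adv = fgsm(X, y, w, eps)
    X_mix = np.vstack([X, X_adv])
    y_mix = np.concatenate([y, y])
    p = sigmoid(X_mix @ w)
    w -= lr * X_mix.T @ (p - y_mix) / len(y_mix)

acc_clean = ((sigmoid(X @ w) > 0.5) == y).mean()
acc_adv = ((sigmoid(fgsm(X, y, w, eps) @ w) > 0.5) == y).mean()
print(acc_clean, acc_adv)
```

The key design point mirrored from the abstract is the mixed batch: the model sees both clean and attacked versions of every input at each update, so the learned decision boundary is pushed away from points an attacker can reach within the perturbation budget `eps`.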