Natural Language Processing (NLP) has emerged as a critical technology for understanding and generating human language, with applications including machine translation, sentiment analysis, and question classification. As a subfield of NLP, question classification focuses on determining the type of information a question seeks, an important step for downstream applications such as question answering systems. This study introduces an ensemble approach to question classification that combines the strengths of the ELECTRA, GloVe, and LSTM models. Evaluated thoroughly on the widely used TREC dataset, the ensemble demonstrates that combining these complementary techniques yields improved results: ELECTRA uses transformer-based contextual encoding to capture complex language, GloVe contributes global vector representations for word-level meaning, and the LSTM models long-range dependencies through sequence learning. By integrating these components, the ensemble offers a robust and effective solution to the challenging problem of question classification, achieving 80% accuracy on the test set in comparisons against well-known models such as BERT, RoBERTa, and DistilBERT.
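The abstract does not specify how the three models' outputs are combined, but a common choice for this kind of ensemble is late fusion: each model produces class logits independently, and the ensemble averages their softmax probabilities before taking the argmax. The sketch below illustrates that idea on toy logits; the model names, the six-class setup (matching TREC's coarse categories), and the uniform weighting are illustrative assumptions, not the paper's actual configuration.

```python
import numpy as np

def softmax(logits):
    """Row-wise softmax, shifted by the max for numerical stability."""
    e = np.exp(logits - logits.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def ensemble_predict(logits_per_model, weights=None):
    """Late-fusion ensemble: weighted average of per-model class
    probabilities, then argmax over classes."""
    probs = np.stack([softmax(l) for l in logits_per_model])
    if weights is None:
        # assumption: equal weight per model (uniform averaging)
        weights = np.full(len(logits_per_model), 1.0 / len(logits_per_model))
    avg = np.tensordot(weights, probs, axes=1)
    return avg.argmax(axis=-1)

# Toy logits: 3 hypothetical models, 2 questions, 6 coarse TREC classes.
electra = np.array([[2.0, 0.1, 0.0, 0.0, 0.0, 0.0],
                    [0.0, 0.0, 1.5, 0.2, 0.0, 0.0]])
glove   = np.array([[1.2, 0.3, 0.0, 0.0, 0.0, 0.0],
                    [0.0, 0.0, 0.4, 1.0, 0.0, 0.0]])
lstm    = np.array([[0.9, 0.0, 0.2, 0.0, 0.0, 0.0],
                    [0.0, 0.1, 1.1, 0.3, 0.0, 0.0]])

preds = ensemble_predict([electra, glove, lstm])  # one class index per question
```

On the second toy question the GloVe-based model alone would pick class 3, but the averaged probabilities follow the two models that agree on class 2, which is exactly the disagreement-smoothing behavior that motivates ensembling.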