In contemporary society, the proliferation of hateful messages online has emerged as a pressing concern, inflicting deleterious consequences on both the societal fabric and individual well-being. The automatic detection of such malevolent content holds promise for mitigating its harmful impact. However, the advent of "hateful memes" poses fresh challenges to the detection paradigm, particularly for deep learning models. These memes, consisting of a textual element paired with an image, are often individually innocuous, but their combination produces a detrimental effect. Consequently, entities responsible for disseminating information on the web are compelled to institute mechanisms that automatically filter out such injurious content. Effectively identifying hateful memes demands algorithms and models with robust vision-language fusion capabilities, capable of reasoning across modalities. This research introduces a novel approach that leverages the multimodal Contrastive Language-Image Pre-Training (CLIP) model, fine-tuned with prompt engineering. The proposed methodology achieves an accuracy of 87.42%; loss, AUROC, and F1 score are also computed, corroborating the efficacy of the proposed strategy. Our findings suggest that this approach offers an efficient means of curbing the spread of hate speech in the form of viral meme content across social networking platforms, thereby contributing to a safer online environment.
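To make the prompt-based classification idea concrete, the following is a minimal sketch of how a CLIP-style model can score a meme against engineered class prompts. Everything here is illustrative: the encoders are random-vector placeholders standing in for CLIP's image and text towers (which in practice would come from a library such as `open_clip` or Hugging Face `transformers`), and the prompt strings and fusion scheme are assumptions, not the paper's exact configuration.

```python
import numpy as np

# Hypothetical embedding dimension; CLIP ViT-B/32, for example, uses 512.
DIM = 512
rng = np.random.default_rng(0)

def normalize(v):
    """L2-normalize a vector, as CLIP does before computing similarities."""
    return v / np.linalg.norm(v)

# Placeholder encoders: a real pipeline would call the CLIP image and text
# towers. Random unit vectors stand in for those embeddings here.
def encode_image(_image):
    return normalize(rng.standard_normal(DIM))

def encode_text(_text):
    return normalize(rng.standard_normal(DIM))

def classify_meme(image, caption,
                  prompts=("a hateful meme", "a benign meme")):
    """Score a fused meme embedding against engineered class prompts."""
    # Simple late fusion: average the image and overlaid-text embeddings.
    fused = normalize(encode_image(image) + encode_text(caption))
    # Cosine similarity reduces to a dot product on unit vectors.
    sims = [float(fused @ encode_text(p)) for p in prompts]
    return prompts[int(np.argmax(sims))]

label = classify_meme("meme.png", "overlaid caption text")
print(label)
```

In a fine-tuned setup, the prompt embeddings (or a small head on top of the fused features) would be trained on labeled memes rather than used zero-shot; the structure of the scoring step stays the same.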