Online social media allows users to connect with a large number of people across the globe and facilitates the efficient exchange of information, catering to many of our day-to-day needs. At the same time, however, these platforms are increasingly used to spread negative stances such as derogatory language, hate speech, and cyberbullying. The task of identifying such negative stances in social media posts, comments, or tweets is termed negative stance detection. A major challenge in negative stance detection is that much of the content published on social media is multilingual. This work aims to identify negative stances in multilingual data streams in low-resource languages on social media using a hybrid approach combining transfer learning and a deep convolutional neural network. The proposed approach starts by preprocessing the multilingual datasets, removing irrelevant information such as special characters and hyperlinks. The processed dataset is then passed through a pre-trained BERT (bidirectional encoder representations from transformers) model, fine-tuned on the dataset under consideration, to generate embeddings. The generated word embeddings are fed to a deep convolutional neural network, which extracts latent features from the texts and discards inessential information. This makes the model robust and effective for learning efficiently on the given dataset and for making appropriate predictions on zero-shot data. The paper employs several optimization strategies to examine the impact of fine-tuning different BERT layers on the model's performance. Intensive experiments are performed on a variety of languages, namely English, French, Italian, Danish, Arabic, Spanish, Indonesian, German, and Portuguese. The experimental results demonstrate the effectiveness and efficiency of the proposed framework.
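The first two stages of the pipeline described above can be sketched in minimal form. The sketch below is illustrative only: the `preprocess` helper is a hypothetical implementation of the text-cleaning step (removing hyperlinks and special characters), and a random matrix stands in for the fine-tuned BERT embeddings, over which a single hand-rolled convolutional filter with ReLU and global max-pooling illustrates how the CNN stage would distill one latent feature per filter.

```python
import re
import numpy as np

def preprocess(text):
    """Illustrative cleaning step: strip hyperlinks and special characters."""
    text = re.sub(r"https?://\S+", " ", text)   # remove hyperlinks
    text = re.sub(r"[^\w\s]", " ", text)        # remove special characters
    return re.sub(r"\s+", " ", text).strip().lower()

def conv1d_maxpool(embeddings, kernel):
    """One CNN filter slid along the token axis, then ReLU and global max-pooling."""
    k, d = kernel.shape
    n = embeddings.shape[0] - k + 1
    responses = np.array([np.sum(embeddings[i:i + k] * kernel) for i in range(n)])
    return np.maximum(responses, 0.0).max()     # one latent feature per filter

rng = np.random.default_rng(0)
tokens = preprocess("Check this out!! https://example.com #awful").split()
emb = rng.normal(size=(len(tokens), 8))         # stand-in for BERT token embeddings
filters = rng.normal(size=(4, 2, 8))            # 4 filters of kernel width 2
features = np.array([conv1d_maxpool(emb, f) for f in filters])
print(features.shape)                           # one pooled feature per filter
```

In the actual framework, the random embedding matrix would be replaced by the output of the fine-tuned BERT encoder, and the pooled feature vector would feed a classification head that predicts the negative stance label.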