With the increase in the number of active users on OSNs (Online Social Networks), the propagation of fake news has become widespread. OSNs provide a platform for users to interact with others by expressing their opinions, resharing content into different networks, and so on. In addition, interactions with posts are also collected, termed social engagement patterns. Drawing on an analogy with the spread of infectious disease, the SENAD (Social Engagement-based News Authenticity Detection) model is proposed, which detects the authenticity of news articles shared on Twitter based on the authenticity and bias of the users who engage with these articles. The proposed SENAD model incorporates the novel idea of an authenticity score and factors in user-engagement-centric measures such as the following-to-followers ratio, account age, and bias. The proposed model significantly improves fake news and fake account detection, as highlighted by a classification accuracy of 93.7%.

Images embedded with textual data catch more attention than purely textual messages and play a vital role in quickly propagating fake news. Images are often altered or misused to spread fake news, and published images have distinctive features that need special attention when determining whether they are real or fake. The Credibility Neural Network (CredNN) framework is proposed to assess the credibility of images on OSNs. It utilizes the spatial properties of CNNs to look for physical alterations in an image and to analyze whether the image reflects a negative sentiment, since fake images often exhibit one or both of these characteristics. The proposed hybrid idea of combining ELA (Error Level Analysis) and sentiment analysis plays a prominent role in detecting fake images, with an accuracy of around 76%.
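The abstract names the user features behind the authenticity score but not its form. As a minimal sketch only, a weighted combination of those features might look like the following; the caps, weights, and the `UserProfile` fields are hypothetical illustrations, not the SENAD paper's actual formulation:

```python
from dataclasses import dataclass

@dataclass
class UserProfile:
    followers: int
    following: int
    account_age_days: int
    bias: float  # hypothetical: 0.0 = neutral, 1.0 = strongly biased

def authenticity_score(user: UserProfile) -> float:
    """Toy score in [0, 1]; higher suggests a more credible user.

    Each engagement-centric feature from the abstract is normalized
    and combined with illustrative weights.
    """
    ratio = user.followers / max(user.following, 1)
    ratio_term = min(ratio, 10) / 10                     # cap extreme ratios
    age_term = min(user.account_age_days, 3650) / 3650   # cap at ~10 years
    bias_term = 1.0 - user.bias                          # less bias, more credible
    return 0.4 * ratio_term + 0.3 * age_term + 0.3 * bias_term

def article_score(users: list[UserProfile]) -> float:
    """Average the scores of users engaging with an article; a low
    average would flag the article as likely inauthentic."""
    return sum(authenticity_score(u) for u in users) / len(users)
```

In this sketch, an article shared mostly by young, high-following/low-follower, strongly biased accounts would receive a low aggregate score, mirroring the disease-spread intuition that unreliable carriers dominate fake news cascades.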
Digital mass media has become the new paradigm of communication, revolving around online social networks. The growing use of online social networks (OSNs) as a primary source of information, together with the growing number of online platforms providing such news, has widened the scope for spreading fake news. People spread fake news in multimedia formats such as images, audio, and video. Visual news is prone to have a psychological impact on users and is often misleading. Therefore, multimodal frameworks for detecting fake posts have gained traction in recent times. This paper proposes a framework that flags fake posts containing visual data embedded with text. The proposed framework works on the Fakeddit dataset, with over 1 million samples containing text, image, metadata, and comment data gathered from a wide range of sources, and tries to exploit the distinguishing features of fake and legitimate images. The framework uses separate architectures to learn visual and linguistic models from each post. Image polarity datasets derived from Flickr are also considered for analysis, and the features extracted from the visual and text-based data helped in flagging fake posts. The proposed fusion model achieved an overall accuracy of 91.94%, precision of 93.43%, recall of 93.07%, and F1-score of 93%. The experimental results show that the proposed multimodal model combining image and text achieves better results than other state-of-the-art models working on a similar dataset.