The fight against fake news and disinformation is an ongoing, multi-faceted task for researchers in the social media and social networks domains, which comprises not only the detection of false facts in published content but also the development of accountability mechanisms that keep a record of the trustworthiness of sources that generate news and, lately, of the networks that deliberately distribute fake information. Toward detecting and handling organized disinformation networks, major social media and social networking sites are currently developing strategies and mechanisms to block such attempts. The role of machine learning techniques, especially neural networks, is crucial in this task. The current work focuses on the popular and promising graph representation techniques and surveys the works that apply Graph Convolutional Networks (GCNs) to the task of detecting fake news, fake accounts and rumors that spread in social networks. It also highlights the available benchmark datasets employed in current research for validating the performance of the proposed methods. This work is a comprehensive survey of the use of GCNs in combating fake news and aims to be an ideal starting point for future researchers in the field.
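The core operation the surveyed methods share is the graph-convolution step, which propagates node features (e.g., account or post attributes) along the edges of a spread graph. The following is a minimal single-layer sketch in NumPy using the standard symmetric normalization; the toy adjacency matrix and feature sizes are illustrative assumptions, not taken from any surveyed paper:

```python
import numpy as np

def gcn_layer(adjacency, features, weights):
    """One graph-convolution step: H' = ReLU(D^-1/2 (A + I) D^-1/2 H W)."""
    a_hat = adjacency + np.eye(adjacency.shape[0])        # add self-loops
    d_inv_sqrt = np.diag(1.0 / np.sqrt(a_hat.sum(axis=1)))
    a_norm = d_inv_sqrt @ a_hat @ d_inv_sqrt              # symmetric normalization
    return np.maximum(0.0, a_norm @ features @ weights)   # ReLU activation

# Hypothetical propagation graph: 4 accounts, edges = who reshared whom
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 1],
              [0, 1, 0, 0],
              [0, 1, 0, 0]], dtype=float)
X = np.random.default_rng(0).normal(size=(4, 8))  # per-node input features
W = np.random.default_rng(1).normal(size=(8, 2))  # learnable layer weights
H = gcn_layer(A, X, W)
print(H.shape)  # (4, 2): one 2-dimensional embedding per node
```

Stacking two or three such layers and pooling the node embeddings yields a graph-level representation that a classifier can label as genuine or fake spread.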
As the amount of content created on social media constantly increases, more and more opinions and sentiments are expressed by people on various subjects. In this respect, sentiment analysis and opinion mining techniques can be valuable for the automatic analysis of huge textual corpora (comments, reviews, tweets etc.). Despite the advances in text mining algorithms, deep learning techniques, and text representation models, the results in such tasks are very good only for a few high-resource languages (e.g., English) that possess large training corpora and rich linguistic resources; there is still considerable room for improvement in lower-resource languages. In this direction, the current work employs various language models for representing social media texts and text classifiers in the Greek language, for detecting the polarity of opinions expressed on social media. The experimental results on a related dataset collected by the authors of the current work are promising, since various classifiers based on the language models (naive Bayes, random forests, support vector machines, logistic regression, deep feed-forward neural networks) outperform those based on word or sentence embeddings (word2vec, GloVe), achieving a classification accuracy of more than 80%. Additionally, a new language model for Greek social media has also been trained on the aforementioned dataset, showing that language models based on domain-specific corpora can improve the performance of generic language models by a margin of 2%. Finally, the resulting models are made freely available to the research community.
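Among the classifiers listed above, naive Bayes is the simplest to reproduce end to end. The sketch below is a minimal multinomial naive Bayes polarity classifier over bag-of-words counts with Laplace smoothing; the class name, the whitespace tokenizer, and the tiny English training set are illustrative assumptions (the paper's experiments use Greek texts and language-model representations):

```python
import math
from collections import Counter

class NaiveBayesPolarity:
    """Minimal multinomial naive Bayes over bag-of-words features."""

    def fit(self, texts, labels):
        self.priors = Counter(labels)                        # class frequencies
        self.counts = {c: Counter() for c in self.priors}    # word counts per class
        for text, label in zip(texts, labels):
            self.counts[label].update(text.lower().split())
        self.vocab = {w for c in self.counts.values() for w in c}
        return self

    def predict(self, text):
        def log_prob(c):
            total = sum(self.counts[c].values()) + len(self.vocab)
            score = math.log(self.priors[c] / sum(self.priors.values()))
            for w in text.lower().split():
                # Laplace (add-one) smoothing avoids zero probabilities
                score += math.log((self.counts[c][w] + 1) / total)
            return score
        return max(self.priors, key=log_prob)

clf = NaiveBayesPolarity().fit(
    ["great phone love it", "awful service never again",
     "love the camera", "terrible battery awful"],
    ["pos", "neg", "pos", "neg"])
print(clf.predict("great camera"))  # pos
```

In the paper's setting, the bag-of-words counts would be replaced by language-model features; the classifier interface stays the same.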
The task of sentiment analysis tries to predict the affective state of a document by examining its content and metadata through the application of machine learning techniques. Recent advances in the field consider sentiment to be a multi-dimensional quantity that pertains to different interpretations (or aspects), rather than a single one. Building on earlier research, the current work examines the said task in the framework of a larger architecture that crawls documents from various online sources. Subsequently, the collected data are pre-processed in order to extract useful features that assist the machine learning algorithms in the sentiment analysis task. More specifically, the words that comprise each text are mapped to a neural embedding space and are provided to a hybrid, bi-directional long short-term memory network, coupled with convolutional layers and an attention mechanism that outputs the final textual features. Additionally, a number of document metadata are extracted, including the number of a document's repetitions in the collected corpus (i.e., the number of reposts/retweets), the frequency and type of emoji ideograms, and the presence of keywords, either extracted automatically or assigned manually in the form of hashtags. The novelty of the proposed approach lies in the semantic annotation of the retrieved keywords, since an ontology-based knowledge management system is queried with the purpose of retrieving the classes the aforementioned keywords belong to. Finally, all features are provided to a fully connected, multi-layered, feed-forward artificial neural network that performs the analysis task. The overall architecture is compared, on a manually collected corpus of documents, with two other state-of-the-art approaches, achieving the best results in identifying negative sentiment, which is of particular interest to certain parties (such as companies) that monitor their online reputation.
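The attention mechanism described above reduces the per-timestep outputs of the BiLSTM to a single document vector by weighting each timestep according to a learned query. The following is a minimal NumPy sketch of that pooling step, assuming (hypothetically) a 6-timestep sequence and a 16-dimensional hidden state; the surrounding BiLSTM and convolutional layers are omitted:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())  # shift for numerical stability
    return e / e.sum()

def attention_pool(hidden_states, query):
    """Weight each timestep's hidden state by its alignment with a learned query."""
    scores = hidden_states @ query             # (T,) alignment scores
    weights = softmax(scores)                  # attention distribution over timesteps
    return weights @ hidden_states, weights    # weighted sum -> document vector

rng = np.random.default_rng(42)
H = rng.normal(size=(6, 16))  # stand-in for 6 timesteps of BiLSTM output
q = rng.normal(size=16)       # stand-in for the learned attention query
doc_vec, attn = attention_pool(H, q)
print(doc_vec.shape)  # (16,): fixed-size vector regardless of sequence length
```

The resulting fixed-size vector can then be concatenated with the metadata features (repost counts, emoji statistics, ontology classes of the hashtags) before entering the final feed-forward classifier.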