Mitigating misinformation on social media remains an unresolved challenge, particularly because of the complexity of information dissemination. To this end, Multivariate Hawkes Processes (MHPs) have become a fundamental tool because they model social network dynamics, facilitating the execution and evaluation of mitigation policies. In this paper, we propose a novel lightweight intervention-based misinformation mitigation framework that uses decentralized Learning Automata (LA) to control the MHP. Each automaton is associated with a single user and learns to what degree that user should be involved in the mitigation strategy by interacting with a corresponding MHP and performing a joint random walk over the state space. We evaluate our approach on three Twitter datasets, one of them a new COVID-19 dataset provided in this paper. Our approach shows fast convergence and increased exposure to valid information. These results persist independently of network structure, including networks with central nodes that could be the root of the misinformation. Further, the LA obtain these results in a decentralized manner, facilitating distributed deployment in real-life scenarios.
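To illustrate the point process underlying this line of work, here is a minimal sketch of a multivariate Hawkes process simulated with Ogata's thinning algorithm. The exponential kernel, the parameter values, and all function names are illustrative assumptions, not details taken from the paper.

```python
import math
import random

def intensity(i, t, events, mu, alpha, beta):
    """Conditional intensity of node i at time t (right limit):
    lambda_i(t) = mu[i] + sum over past events (j, s) of alpha[i][j] * exp(-beta * (t - s))."""
    lam = mu[i]
    for j, s in events:
        if s <= t:
            lam += alpha[i][j] * math.exp(-beta * (t - s))
    return lam

def simulate_mhp(mu, alpha, beta, horizon, seed=0):
    """Ogata's thinning: between events the intensity only decays, so the
    total intensity at the current time upper-bounds it until the next event."""
    rng = random.Random(seed)
    n = len(mu)
    events = []  # list of (node, time)
    t = 0.0
    while t < horizon:
        lam_bar = sum(intensity(i, t, events, mu, alpha, beta) for i in range(n))
        t += rng.expovariate(lam_bar)          # candidate event time
        if t >= horizon:
            break
        lams = [intensity(i, t, events, mu, alpha, beta) for i in range(n)]
        if rng.random() * lam_bar <= sum(lams):  # accept with prob sum(lams)/lam_bar
            u = rng.random() * sum(lams)         # attribute to a node
            acc = 0.0
            for i, lam in enumerate(lams):
                acc += lam
                if u <= acc:
                    events.append((i, t))
                    break
    return events
```

In the mitigation setting, an intervention would modulate the baseline rates `mu` (or the excitation matrix `alpha`) of valid-information nodes; the LA framework learns per-user how strongly to do so.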
Assessing the trustworthiness of information in political discussions on social media is crucial for maintaining social order, especially during emergencies. The polarized nature of political topics and the echo chamber effect of social media platforms create a deceptive and divisive environment. During a political crisis, a vast amount of information propagates on social media, leading to high levels of polarization and deception by interested parties. Traditional approaches to tackling misinformation on social media usually lack a comprehensive problem definition because of the problem's complexity. This paper proposes a probabilistic graphical model as a theoretical view of the credibility of ordinary users on social media during a political crisis, where polarization and deception are key properties. Such noisy signals dramatically influence any attempt at misinformation detection. Hence, we introduce a causal Bayesian network structured around the main entities that take part in the process dynamics. Our methodology treats misinformation detection as a question of cause and effect rather than a mere classification task. This causality-based approach provides a practical roadmap for real-world sub-problems such as estimating individual polarization levels, detecting misinformation, and analyzing the sensitivity of the problem. Moreover, it facilitates intervention simulations that reveal both positive and negative effects on the deception level over the network.
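To make the intervention idea concrete, here is a toy causal Bayesian network evaluated by exhaustive enumeration. The three binary variables (Polarization → Deception → Misinformation) and all conditional probability values are hypothetical placeholders, not the paper's actual network.

```python
import itertools

# Hypothetical causal chain: Polarization (P) -> Deception (D) -> Misinformation (M).
P_P = {1: 0.4, 0: 0.6}                                        # prior P(P)
P_D_given_P = {1: {1: 0.7, 0: 0.3}, 0: {1: 0.2, 0: 0.8}}      # P(D | P)
P_M_given_D = {1: {1: 0.8, 0: 0.2}, 0: {1: 0.1, 0: 0.9}}      # P(M | D)

def prob_M(intervene_D=None):
    """P(M=1), optionally under the intervention do(D=intervene_D),
    which cuts the incoming edge P -> D and clamps D to a value."""
    total = 0.0
    for p, d in itertools.product((0, 1), repeat=2):
        if intervene_D is not None:
            if d != intervene_D:
                continue
            w = P_P[p]                     # D no longer depends on P
        else:
            w = P_P[p] * P_D_given_P[p][d]
        total += w * P_M_given_D[d][1]
    return total
```

Comparing `prob_M()` with `prob_M(intervene_D=0)` quantifies how much suppressing deception would lower the misinformation rate, which is exactly the kind of intervention simulation the abstract describes.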
Tsetlin Machine (TM) is a logic-based machine learning approach with the crucial advantages of being transparent and hardware-friendly. While TMs match or surpass deep learning accuracy for an increasing number of applications, large clause pools tend to produce clauses with many literals (long clauses). As such, they become less interpretable. Further, longer clauses increase the switching activity of the clause logic in hardware, consuming more power. This paper introduces a novel variant of TM learning -- Clause Size Constrained TMs (CSC-TMs) -- where one can set a soft constraint on the clause size. As soon as a clause includes more literals than the constraint allows, it starts expelling literals. Accordingly, oversized clauses only appear transiently. To evaluate CSC-TM, we conduct classification, clustering, and regression experiments on tabular data, natural language text, images, and board games. Our results show that CSC-TM maintains accuracy with up to 80 times fewer literals. Indeed, the accuracy increases with shorter clauses for TREC and BBC Sports. After the accuracy peaks, it drops gracefully as the clause size approaches one literal. We finally analyze CSC-TM power consumption and derive new convergence properties.
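A greatly simplified sketch of the clause-size constraint follows. In the actual CSC-TM, literal expulsion is driven by the stochastic Tsetlin Automata feedback; here oversized clauses simply drop uniformly random literals, and the data structures and function names are illustrative assumptions.

```python
import random

def eval_clause(literals, x):
    """A TM clause is a conjunction of literals over a Boolean input x.
    Each literal is (feature_index, negated)."""
    return all((not x[k]) if neg else x[k] for k, neg in literals)

def size_feedback(literals, max_size, rng):
    """Simplified CSC-TM size feedback: once a clause exceeds the soft
    constraint, it expels literals until it fits, so oversized clauses
    appear only transiently. Real CSC-TM expels via stochastic feedback;
    uniform removal stands in for that here."""
    literals = list(literals)
    while len(literals) > max_size:
        literals.pop(rng.randrange(len(literals)))
    return literals
```

Shorter clauses touch fewer inputs per evaluation, which is the source of both the interpretability gain and the reduced switching activity in hardware.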
Social Networks (SNs), such as Facebook, Twitter, and LinkedIn, have become ubiquitous in our daily life. However, as the number of SN users grows, there is a higher demand for users' Quality of Experience (QoE). Some users may prefer to subscribe to a higher Quality of Service (QoS) level with their SN provider, e.g., to have higher priority for posting/retrieving when there are outages, such as the Twitter outage during the 2014 Oscars. In addition, some users may wish to filter out certain posts, e.g., unwanted friendship requests. In this paper, we propose a novel architecture that enables differentiated QoS and information filtering in SNs to improve users' QoE. Our SN runs on top of 3GPP 4G Evolved Packet Core (EPC)-based systems and uses EPC services to enable differentiated QoS. The components of our architecture interact through RESTful web services. Our architecture allows users to filter posts according to their own criteria and to have priority over other users in posting and/or retrieving.
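As a rough sketch of the user-defined filtering component, the snippet below builds a per-user predicate that the SN could apply before delivering posts. The criteria (blocked senders, blocked keywords) and the post dictionary shape are hypothetical examples, not the paper's actual interface.

```python
def make_filter(blocked_senders=(), blocked_keywords=()):
    """Build a per-user post filter: a post is delivered only if its sender
    is not blocked and its text contains no blocked keyword (case-insensitive)."""
    senders = set(blocked_senders)
    keywords = [k.lower() for k in blocked_keywords]

    def allow(post):
        if post["sender"] in senders:
            return False
        text = post["text"].lower()
        return not any(k in text for k in keywords)

    return allow
```

In the proposed architecture, such criteria would be registered through the RESTful interface and evaluated server-side so that filtered posts never reach the user.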