Social media, a digital environment created to establish and enhance human communication across networks and channels, has grown at such a rate that it is hard to find a concept, technology, or business without a social media presence or plans for one. At the same time, social media has become a playground, and even a battlefield, where ideas of varying validity are spread to reach their target audiences by entities that may be well known and trustworthy, uncertain, or outright malicious. In the effort to prevent, contain, and limit the social manipulation carried out by the latter two kinds of entities, effective security awareness is a critical first step. To that end, research and practitioner communities have proposed numerous strategies, policies, methods, and technologies, but such initiatives mostly take the defender's perspective, which is not enough in a cyberspace where the attacker holds the advantage. This research therefore aims to build social media manipulation security awareness from the offender's stance by generating and analysing disinformation tweets using deep learning. To reach this goal, a Design Science Research methodology is followed within a Data Science approach, and the results obtained are analysed and positioned within the ongoing discourses, showing the effectiveness of the approach and its role in building future solutions for detecting social media manipulation. The research also intends to inform the design of transparent and responsible modelling and gaming solutions for building and enhancing social manipulation awareness, and the definition of realistic cyber/information operations scenarios that engage large, multi-domain expert and non-expert audiences.
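The generate-then-analyse pipeline described above can be illustrated with a deliberately small sketch. The corpus, seed word, and the word-level bigram (Markov) generator below are all invented stand-ins for illustration only; the study itself uses a deep learning language model trained on real tweets, which is not reproduced here.

```python
import random
from collections import defaultdict

# Invented toy corpus standing in for a collected tweet dataset.
corpus = [
    "breaking news officials confirm the report",
    "officials deny the report entirely",
    "the report was never confirmed by officials",
]

# Build word-level bigram transitions: prev word -> possible next words.
bigrams = defaultdict(list)
for sentence in corpus:
    words = sentence.split()
    for prev, nxt in zip(words, words[1:]):
        bigrams[prev].append(nxt)

def generate(seed, length=8, rng=None):
    """Sample a short synthetic 'tweet' by following bigram transitions."""
    rng = rng or random.Random(0)
    out = [seed]
    for _ in range(length - 1):
        candidates = bigrams.get(out[-1])
        if not candidates:
            break
        out.append(rng.choice(candidates))
    return " ".join(out)

print(generate("officials"))
```

Synthetic texts produced this way (by a far stronger neural generator in the study) can then feed the analysis and awareness-building steps the abstract describes.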
In recent years, social media has remained not only an environment for expression but has also become an active battlefield where users voice their thoughts, feelings, beliefs, and experiences of ongoing conflicts and wars, and where social manipulation techniques can be difficult to identify, tackle, and counter. Adequate social media analytics solutions addressing the ongoing war in Ukraine are still incipient, yet much needed. Consequently, this research aims to extract and analyse the topics discussed and the sentiments expressed by Ukrainian Telegram users, based on data collected in the first two months of the war, through a design science research and data science methodological approach carried out from a multidisciplinary stance, contributing to the ongoing strategic, socio-ethical, and technical discourses and efforts.
Through technological advances and societal trends, social media has become an active part of, and a catalyst for, conflicts and wars fought in the physical environment. A direct example is the cyber/information operations conducted in conjunction with the ongoing Russian-Ukrainian war. Because such operations are packaged in social media manipulation mechanisms like disinformation and misinformation, using techniques such as manufactured controversies, fake news, and deepfakes, a high degree of confusion and uncertainty surrounds both the events themselves and users' behaviour and beliefs. These operations also affect civilians on the battlefield and their loved ones. At present, limited scientific and objective effort is dedicated to this issue, owing to, for example, data availability, strategic, and emotional constraints. This research therefore aims to capture the main topics discussed and the sentiments expressed by Ukrainian Telegram users about the Russian-Ukrainian war in 2022, using a Data Science approach in which a series of Machine Learning models is built on multi-channel data collected during the first six months of the war. Accordingly, the research contributes to understanding the real discourses and dynamics of the conflict through direct sources, to producing and sustaining social media security awareness, and to building resilience against social media manipulation campaigns using AI.
In essence, social media is on its way to representing the superposition of all digital representations of human concepts, ideas, beliefs, attitudes, and experiences. In this realm, information is not only shared but also misinterpreted or deliberately distorted, whether unintentionally or guided by some degree of awareness, uncertainty, or offensive intent. This can lead to societal and political polarization and can influence or alter human behaviour and beliefs. To tackle these issues, which correspond to social media manipulation mechanisms such as disinformation and misinformation, a diverse palette of efforts has been proposed, ranging from governmental and platform strategies, policies, and methods to academic and independent studies and solutions. From a technical standpoint, however, such solutions rely mainly on gaming or AI-based techniques, often consider only the defender's perspective, and address the social dimension of the phenomenon in a limited, single-angled way. To address these gaps, this research combines the defender's perspective with the offender's by (i) building a hybrid deep learning model for generating and detecting disinformation and (ii) capturing and proposing a set of design recommendations for establishing the patterns, requirements, and features of future gaming and AI-based solutions against social media manipulation mechanisms. This is done using the Design Science Research methodology within a Data Science approach, aiming to enhance security awareness and resilience against social media manipulation.
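The detection half of the hybrid model described above can be sketched with a classical text-classification baseline. The texts, labels, and feature choices below are invented for illustration; a TF-IDF plus logistic regression pipeline stands in here for the paper's deep learning detector.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Invented toy data: 1 = disinformation-style post, 0 = ordinary post.
texts = [
    "shocking secret cure they do not want you to know",
    "leaked proof the election was stolen share now",
    "city council meets thursday to discuss the budget",
    "local library extends weekend opening hours",
]
labels = [1, 1, 0, 0]

# Baseline detector: TF-IDF features + logistic regression,
# standing in for the study's deep learning classifier.
detector = make_pipeline(TfidfVectorizer(), LogisticRegression())
detector.fit(texts, labels)

print(detector.predict(["shocking leaked secret share now"]))
```

In the hybrid setting the abstract describes, the generator's synthetic disinformation would augment the positive class, an adversarial loop that the defender-only solutions it criticises lack.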