Natural disasters affect thousands of communities every year, leaving behind human losses, billions of dollars in rebuilding efforts, and psychological harm to survivors. How fast a community recovers from a disaster, or how well it can mitigate disaster risk in the first place, depends on how resilient that community is. One main factor influencing a community's resilience is how it comes together in times of need. Social cohesion is considered to be "the glue that holds society together," and it can be examined most clearly in a critical situation. There is no consensus on how to measure social cohesion, but recent literature indicates that social media communications and communities play an essential role in today's disaster mitigation strategies. This research explores how to quantify social cohesion through social media outlets during disasters. The approach combines text processing techniques and graph network analysis to understand the relationships between nine different types of participants during hurricanes Harvey, Irma, and Maria. Visualizations illustrate these connections, their evolution before, during, and after the disasters, and the degree of social cohesion throughout the timeline. The proposed measurement of social cohesion through social media networks can inform future risk management and disaster mitigation policies. The measure identifies the types of actors in a social network and how the network varies from day to day. Decision-makers could therefore use it to release strategic communications before, during, and after a disaster strikes, providing relevant information to people in need.
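As a rough illustration of the graph-network side of this approach, the sketch below builds a daily interaction graph from labeled Twitter interactions and computes a simple cohesion proxy. The toy edge list, the actor categories, and the use of graph density plus average clustering as the cohesion score are illustrative assumptions, not the exact formulation used in the study.

```python
# Minimal sketch: quantifying daily "cohesion" in a disaster-related
# interaction network with NetworkX. The records below are hypothetical,
# and density + clustering is only one possible cohesion proxy.
from collections import defaultdict
import networkx as nx

# (day, source_user, target_user, actor_type_of_source) -- hypothetical records
interactions = [
    ("2017-08-25", "fema", "resident_1", "government"),
    ("2017-08-25", "resident_1", "news_tx", "eyewitness"),
    ("2017-08-26", "ngo_relief", "resident_2", "NGO"),
    ("2017-08-26", "resident_2", "fema", "eyewitness"),
    ("2017-08-26", "news_tx", "fema", "media"),
]

# Build one undirected interaction graph per day.
graphs = defaultdict(nx.Graph)
for day, src, dst, actor_type in interactions:
    g = graphs[day]
    g.add_node(src, actor=actor_type)
    g.add_edge(src, dst)

# Track how the cohesion proxy varies from day to day.
for day, g in sorted(graphs.items()):
    cohesion = 0.5 * (nx.density(g) + nx.average_clustering(g))
    print(day, round(cohesion, 3))
```

In a real pipeline, the daily scores would be plotted across the before/during/after phases of each hurricane to visualize how cohesion evolves.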
Symbolic sequential data are produced in huge quantities in numerous contexts, such as text and speech data, biometrics, genomics, financial market indexes, music sheets, and online social media posts. In this paper, an unsupervised approach to chunking idiomatic units in sequential text data is presented. Text chunking refers to the task of splitting a string of textual information into non-overlapping groups of related units. This is a fundamental problem in numerous fields where understanding the relation between raw units of symbolic sequential data is relevant. Existing methods are based primarily on supervised and semi-supervised learning; in this study, however, a novel unsupervised approach is proposed, based on the existing concept of n-grams, which requires no labeled text as input. The proposed methodology is applied to two natural language corpora: a Wall Street Journal corpus and a Twitter corpus. In both cases, the corpus length was increased gradually to measure accuracy with different numbers of unitary elements as input. Both corpora show accuracy improvements proportional to the increase in the number of tokens; for the Twitter corpus, the increase in accuracy follows a linear trend. The results show that the proposed methodology can achieve higher accuracy with incremental usage. A future study will aim at designing an iterative system for the proposed methodology.
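A minimal sketch of an n-gram-based, unsupervised chunker in the spirit described above: bigram counts from an unlabeled corpus drive a greedy left-to-right merge of adjacent tokens. The toy corpus, the frequency threshold, and the greedy merge rule are illustrative assumptions; the paper's scoring of candidate chunks may differ.

```python
# Minimal sketch of unsupervised, frequency-based text chunking with n-grams.
# No labeled data is used: bigram counts alone decide which adjacent tokens
# belong to the same chunk.
from collections import Counter

corpus = [
    "new york stock exchange opened higher",
    "the new york stock exchange closed lower",
    "traders on the stock exchange were cautious",
]

tokens = [s.split() for s in corpus]
bigrams = Counter(b for sent in tokens for b in zip(sent, sent[1:]))

def chunk(sentence, min_count=2):
    """Greedily merge adjacent tokens whose bigram count meets the threshold."""
    words = sentence.split()
    chunks, current = [], [words[0]]
    for prev, word in zip(words, words[1:]):
        if bigrams[(prev, word)] >= min_count:
            current.append(word)          # extend the current chunk
        else:
            chunks.append(" ".join(current))
            current = [word]              # start a new chunk
    chunks.append(" ".join(current))
    return chunks

print(chunk("the new york stock exchange opened higher"))
# -> ['the', 'new york stock exchange', 'opened', 'higher']
```

Because the bigram counts grow with the corpus, adding more tokens tends to make recurring idiomatic units easier to separate from incidental word pairs, which is consistent with the accuracy trend reported above.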
Disasters strike communities around the world with a reduced time frame for warning and action, leaving behind high rates of damage, mortality, and years of rebuilding effort. Over the past decade, social media has played a positive role in communication before, during, and after disasters. One important question that remains uninvestigated is whether social media efficiently connects affected individuals to disaster relief agencies and, if not, how AI models can use historical data from previous disasters to facilitate information exchange between the two groups. In this study, a BERT model is first fine-tuned on historical data and then used to classify tweets associated with hurricanes Dorian and Harvey by the type of information they provide; in parallel, the network between users is constructed from retweets and replies on Twitter. Network metrics are then used to measure the diffusion rate of each type of disaster-motivated information. The results show that messages from disaster eyewitnesses receive the least spread, while posts by governments and media have the highest diffusion rates through the network. Additionally, "cautions and advice" messages spread the most among the information types, while "infrastructure and utilities" and "affected individuals" messages receive the least diffusion, even compared with "sympathy and support". The analysis suggests that facilitating the propagation of information provided by affected individuals, using AI models, would be a valuable strategy for accelerating communication between affected individuals and relief groups during a disaster and its aftermath.
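The sketch below illustrates the network-analysis stage under simplifying assumptions: the tweet labels are hard-coded in place of the fine-tuned BERT output, the retweet/reply cascades are a toy edge list, and downstream reach (number of descendant nodes) stands in for the study's diffusion metrics.

```python
# Minimal sketch of the diffusion measurement: given tweets already labeled
# by information type, build a retweet/reply graph and compare how far each
# type spreads. All data below are hypothetical.
import networkx as nx

# tweet_id -> information type, as if produced by the fine-tuned classifier
labels = {"t1": "cautions_and_advice", "t2": "affected_individuals"}

# (source_tweet, retweeting/replying tweet) edges forming the cascades
edges = [("t1", "r1"), ("r1", "r2"), ("t1", "r3"), ("t2", "r4")]

g = nx.DiGraph(edges)

for tweet_id, info_type in labels.items():
    reach = len(nx.descendants(g, tweet_id))  # accounts reached downstream
    print(f"{info_type}: reached {reach} accounts")
```

Aggregating such reach values per information type is one simple way to compare diffusion rates across categories such as "cautions and advice" versus "affected individuals".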