Disaster resilience is the capacity of a community to “bounce back” from disastrous events. Most studies rely on traditional data, such as census data, to study community resilience. With the increasing use of social media, new data sources such as Twitter could be used to monitor human response during different phases of a disaster and thus better understand resilience. An important research question is: Does Twitter use correlate with disaster resilience? Specifically, will communities with more disaster-related Twitter use be more resilient to disasters, presumably because they have better situational awareness? The underlying issue is that if there are social and geographical disparities in Twitter use, how will such disparities affect communities’ resilience to disasters? This study examines the relationship between Twitter use and community resilience during Hurricane Isaac, which hit Louisiana and Mississippi in August 2012. First, we applied the resilience inference measurement (RIM) model to calculate the resilience indices of 146 affected counties. Second, we analyzed Twitter use and its sentiment patterns through the three phases of Hurricane Isaac: preparedness, response, and recovery. Third, we correlated Twitter use density and sentiment scores with the resilience scores and major social–environmental variables to test whether significant geographical and social disparities in Twitter use existed through the three phases of disaster management. Significant positive correlations were found between Twitter use density and resilience indicators, confirming that communities with higher resilience capacity, which are characterized by better social–environmental conditions, tend to have higher Twitter use. These results imply that improving Twitter use during disasters could increase the resilience of affected communities.
On the other hand, no significant correlations were found between sentiment scores and resilience indicators, suggesting that further research on sentiment analysis may be needed.
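The correlation analysis described above can be sketched with a minimal Pearson correlation computation. The data values below are hypothetical placeholders for county-level tweet density and resilience scores; the study's actual figures are not reproduced here.

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical county-level data: Twitter use density vs. resilience score
tweet_density = [1.2, 0.4, 3.1, 2.0, 0.9, 2.7]
resilience    = [0.55, 0.30, 0.80, 0.70, 0.40, 0.75]

r = pearson_r(tweet_density, resilience)  # close to +1 for this toy sample
```

A strongly positive `r`, as reported in the abstract, would indicate that counties with denser disaster-related tweeting also score higher on the resilience index; significance would then be assessed with the corresponding t-test or a permutation test.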
The abundance of information available on social networks can provide invaluable insights into people's responses to health information and public health guidance concerning COVID-19. This study examines tweeting patterns and public engagement on Twitter related to public health messaging in two U.S. states (Washington and Louisiana) during the early stage of the pandemic. We analyze more than 7M tweets and 571K COVID-19-related tweets posted by users in the two states over the first 25 days of the pandemic in the U.S. (Feb. 23, 2020, to Mar. 18, 2020). We also qualitatively code and examine 460 tweets posted by selected governmental official accounts during the same period for public engagement analysis. We use various methods to analyze the data, including statistical analysis, sentiment analysis, and word usage metrics, to find inter- and intra-state disparities in tweeting patterns and public engagement with health messaging. Our findings reveal that users in Washington were more active on Twitter than users in Louisiana in terms of both the total number and the density of COVID-19-related tweets during the early stage of the pandemic. Our correlation analysis at the county or parish level shows that Twitter activity (tweet density, COVID-19 tweet density, and user density) was positively correlated with population density in both states at the 0.01 significance level. Our sentiment analysis results demonstrate that the average daily sentiment scores of all tweets and of COVID-19-related tweets in Washington were consistently higher than those in Louisiana during this period. While the daily average sentiment scores of COVID-19-related tweets were in the negative range, the scores of all tweets were in the positive range in both states.
Lastly, our analysis of governmental Twitter accounts found that these accounts' messages were most commonly meant to spread information about the pandemic, but that users were most likely to engage with tweets that asked readers to take action, such as hand washing.

INDEX TERMS COVID-19, geospatial data analysis, natural language processing, public engagement, public health messaging, sentiment analysis, statistical analysis, Twitter data analytics.
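The sentiment analysis step can be illustrated with a minimal lexicon-based scorer. This is only a sketch: the abstract does not name the sentiment tool actually used, and the word lists and tweets below are invented for illustration. Scores fall in [-1, 1], computed as positive minus negative word counts normalized by tweet length.

```python
# Hypothetical sentiment lexicons (illustrative only)
POSITIVE = {"safe", "hope", "recover", "help", "thanks", "good"}
NEGATIVE = {"fear", "sick", "death", "crisis", "bad", "panic"}

def sentiment_score(text):
    """Lexicon-based score in [-1, 1]: (pos - neg) / token count."""
    tokens = text.lower().split()
    if not tokens:
        return 0.0
    pos = sum(t.strip(".,!?") in POSITIVE for t in tokens)
    neg = sum(t.strip(".,!?") in NEGATIVE for t in tokens)
    return (pos - neg) / len(tokens)

# Hypothetical tweets from one day; a daily average like this underlies
# the state-level comparison in the abstract.
tweets = ["Stay safe and help your neighbors!",
          "So much fear and panic today."]
daily_scores = [sentiment_score(t) for t in tweets]
daily_avg = sum(daily_scores) / len(daily_scores)
```

A production pipeline would use a validated sentiment model rather than a hand-built lexicon, but the aggregation into daily averages per state would follow the same pattern.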
AI fairness is tasked with evaluating and mitigating bias in algorithms that may discriminate against protected groups. This paper examines whether, and in what manner, bias exists in AI algorithms used in disaster management. We consider the 2017 Hurricane Harvey, when flood victims in Houston resorted to social media to request rescue. We evaluate a Random Forest regression model trained to predict Twitter rescue request rates from social-environmental data using three fairness criteria (independence, separation, and sufficiency). The Social Vulnerability Index (SVI), its four sub-indices, and four variables representing the digital divide were considered sensitive attributes. The Random Forest regression model extracted seven significant predictors of rescue request rates; from high to low importance, they were percent of renter-occupied housing units, percent of roads in the flood zone, percent of flood zone area, percent of wetland cover, percent of herbaceous, forested, and shrub cover, mean elevation, and percent of households with no computer or device. Partial dependence plots of rescue request rates against each of the seven predictors show the non-linear nature of their relationships. Results of the fairness evaluation of the Random Forest model using the three criteria show no obvious biases for the nine sensitive attributes, except for a minor imperfect sufficiency found with the SVI Housing and Transportation sub-index. Future AI modeling in disaster research could apply the same methodology used in this paper to evaluate fairness and help reduce unfair resource allocation and other social and geographical disparities.
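One of the fairness criteria named above, separation, requires (roughly) that a model's errors not depend on the sensitive attribute. A minimal proxy check is to compare mean residuals across groups of a binarized sensitive attribute; the data and threshold below are hypothetical, and the paper's actual evaluation procedure may differ.

```python
def mean(xs):
    return sum(xs) / len(xs)

def group_residual_gap(y_true, y_pred, sensitive):
    """Absolute difference in mean residual between two groups (0/1).

    A large gap suggests the model systematically over- or
    under-predicts for one group, a violation of separation.
    """
    residuals = [p - t for t, p in zip(y_true, y_pred)]
    g0 = [r for r, s in zip(residuals, sensitive) if s == 0]
    g1 = [r for r, s in zip(residuals, sensitive) if s == 1]
    return abs(mean(g0) - mean(g1))

# Hypothetical rescue-request rates: observed, model-predicted, and a
# binarized sensitive attribute (e.g., SVI above/below the median)
y_true = [2.0, 1.5, 3.0, 2.5, 1.0, 2.2]
y_pred = [2.1, 1.4, 3.2, 2.4, 1.1, 2.3]
sens   = [0, 0, 0, 1, 1, 1]

gap = group_residual_gap(y_true, y_pred, sens)  # small gap ~ no obvious bias
```

Analogous checks can be framed for independence (predictions independent of the sensitive attribute) and sufficiency (the sensitive attribute adds no information about the outcome given the prediction), which is where the abstract reports a minor imperfection for the SVI Housing and Transportation sub-index.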