Freshwater lakes supply much of the inland water that sustains local and regional development. However, some lake systems undergo great fluctuations in water surface area. Poyang Lake, the largest freshwater lake in China, shows dramatic seasonal and interannual variations. Timely monitoring of the Poyang Lake surface provides essential information on the variation of water occurrence for ecosystem conservation. Histogram-based image segmentation of radar imagery has been widely used to detect lake water surfaces, yet selecting the optimal threshold remains challenging. Here, we analyze the advantages and disadvantages of a segmentation algorithm, the Otsu method, from both mathematical and application perspectives. We implement the Otsu method and provide reusable scripts that automatically select a threshold for surface water extraction from Sentinel-1 synthetic aperture radar (SAR) imagery on Google Earth Engine, a cloud-based platform that accelerates the processing of Sentinel-1 data and the auto-threshold computation. The optimal thresholds for January of 2017 to 2020 are −14.88, −16.93, −16.96 and −16.87 respectively, and the overall accuracy reaches 92% after rectification. Furthermore, our study updates the record of temporal and spatial variation of Poyang Lake, confirming that its surface water area fluctuated annually and tended to shrink both in the center and at the boundary of the lake in January of each year from 2017 to 2020.
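The threshold-selection step described above is Otsu's method: given a histogram of pixel values, pick the cut that maximizes the between-class variance of the two resulting classes (here, water versus non-water backscatter). The snippet below is a minimal NumPy sketch of that selection step; the function name `otsu_threshold` and its parameters are illustrative assumptions, and the study's published scripts instead run on Sentinel-1 histograms within Google Earth Engine.

```python
import numpy as np


def otsu_threshold(values, bins=256):
    """Return the threshold maximizing between-class variance (Otsu's method).

    Illustrative sketch for a 1-D sample of SAR backscatter values; the
    paper applies the same idea on Google Earth Engine rather than to an
    in-memory array.
    """
    hist, edges = np.histogram(values, bins=bins)
    centers = (edges[:-1] + edges[1:]) / 2.0   # bin midpoints as candidates
    total = hist.sum()
    sum_all = (hist * centers).sum()

    best_t, best_var = centers[0], -1.0
    w0 = 0.0    # cumulative count of class 0 (values <= threshold)
    sum0 = 0.0  # cumulative weighted sum of class 0
    for i in range(len(hist)):
        w0 += hist[i]
        if w0 == 0 or w0 == total:
            continue  # both classes must be non-empty
        sum0 += hist[i] * centers[i]
        w1 = total - w0
        mu0, mu1 = sum0 / w0, (sum_all - sum0) / w1
        var_between = w0 * w1 * (mu0 - mu1) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, centers[i]
    return best_t
```

On a clearly bimodal backscatter distribution (e.g. open water near one mode, land near another), the returned threshold falls between the two modes, which is what makes the selection automatic rather than hand-tuned.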
Natural disasters cause significant damage, casualties and economic losses. Twitter has been used to support prompt disaster response and management because people tend to communicate and spread information on public social media platforms during disaster events. To retrieve real-time situational awareness (SA) information from tweets, natural language processing (NLP) is the most effective way to mine the text. Among advanced NLP models, supervised approaches can classify tweets into different categories to gain insight and leverage useful SA information from social media data. However, high-performing supervised models require domain knowledge to specify categories and involve costly labelling tasks. This research proposes a guided latent Dirichlet allocation (LDA) workflow to investigate temporal latent topics in tweets during a recent disaster event, the 2020 Hurricane Laura. By integrating prior knowledge, a coherence model, LDA topic visualisation and validation against official reports, our guided approach reveals that most tweets contain several latent topics during the 10-day period of Hurricane Laura. This result indicates that state-of-the-art supervised models have not fully utilised tweet information because they assign each tweet only a single label. In contrast, our model can not only identify emerging topics during different disaster events but also provide multilabel references for the classification schema. In addition, our results can help quickly identify and deliver SA information to responders, stakeholders and the general public so that they can adopt timely response strategies and allocate resources wisely during hurricane events.
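The "guidance" in a guided LDA comes from biasing the topic-word prior toward seed keywords supplied as prior knowledge, so that each topic gravitates toward a known theme. The sketch below is a toy collapsed Gibbs sampler illustrating that idea in plain NumPy; it is not the workflow from the paper, and all names (`guided_lda_gibbs`, `seed_boost`, the toy corpus) are hypothetical. The returned per-tweet topic mixture shows why a single tweet can carry several latent topics, supporting the multilabel reading described above.

```python
import numpy as np


def guided_lda_gibbs(docs, vocab, seed_words, n_topics, n_iter=200,
                     alpha=0.1, beta=0.01, seed_boost=1.0, rng_seed=0):
    """Tiny collapsed Gibbs sampler for LDA with seeded guidance.

    Seed words get an inflated topic-word prior for their target topic,
    which pulls each topic toward a known theme. Illustrative sketch only.
    """
    rng = np.random.default_rng(rng_seed)
    V, K = len(vocab), n_topics
    word_id = {w: i for i, w in enumerate(vocab)}

    # Asymmetric topic-word prior: boost seed words in their target topic.
    eta = np.full((K, V), beta)
    for k, words in seed_words.items():
        for w in words:
            eta[k, word_id[w]] += seed_boost

    # Tokenize documents and assign random initial topics.
    W = [[word_id[w] for w in d] for d in docs]
    Z = [rng.integers(K, size=len(d)) for d in W]
    ndk = np.zeros((len(docs), K))   # doc-topic counts
    nkw = np.zeros((K, V))           # topic-word counts
    nk = np.zeros(K)                 # topic totals
    for d, (ws, zs) in enumerate(zip(W, Z)):
        for w, z in zip(ws, zs):
            ndk[d, z] += 1; nkw[z, w] += 1; nk[z] += 1

    for _ in range(n_iter):
        for d, (ws, zs) in enumerate(zip(W, Z)):
            for i, w in enumerate(ws):
                z = zs[i]  # remove the token's current assignment
                ndk[d, z] -= 1; nkw[z, w] -= 1; nk[z] -= 1
                # Collapsed conditional: doc preference x guided word prior.
                p = (ndk[d] + alpha) * (nkw[:, w] + eta[:, w]) / (nk + eta.sum(1))
                z = rng.choice(K, p=p / p.sum())
                zs[i] = z
                ndk[d, z] += 1; nkw[z, w] += 1; nk[z] += 1

    # Per-document topic mixture: rows sum to 1, so one tweet can
    # legitimately belong to several topics at once (multilabel reading).
    return (ndk + alpha) / (ndk + alpha).sum(1, keepdims=True)
```

Because the output is a mixture rather than a single label, a tweet mentioning both storm damage and a rescue request would receive weight on both topics, which is precisely the information a single-label supervised classifier discards.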