Accurate tropical cyclone (TC) intensity estimation is crucial for prediction and disaster prevention. Convolutional neural networks (CNNs) have achieved significant progress in TC intensity estimation. However, many studies have overlooked the fact that the local convolutions used by CNNs ignore global spatial relationships between pixels and can therefore capture only limited spatial context. In addition, convolutional kernels alone cannot fully express the rotation invariance and symmetric structure of TCs. This study therefore proposes a new deep learning model for TC intensity estimation that combines rotation-equivariant convolution with a Transformer to address the rotation invariance and symmetric structure of TCs. Combining the two captures both local and global spatial context, enabling more accurate intensity estimation. Furthermore, we fused multi-platform satellite remote sensing data into the model to provide more information about TC structure, and we integrated physical environmental field information, which helps capture the impact of these external factors on TC intensity and further improves estimation accuracy. Finally, we trained the model on TCs from 2003 to 2015 and verified it on independent validation data from 2016 and 2017. The overall root mean square error (RMSE) is 8.19 kt; for a subset of 482 samples (from the East Pacific and Atlantic) observed by aircraft reconnaissance, the RMSE is 7.88 kt.
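The rotation-equivariance idea in this abstract can be illustrated with a minimal sketch (not the paper's implementation, which uses learned rotation-equivariant layers inside a deep network): a single filter is applied at all four 90-degree rotations and the responses are max-pooled over orientations, so the pooled feature map responds consistently however the TC spiral is oriented. All function names below are illustrative.

```python
import numpy as np

def conv2d_valid(img, kernel):
    """Plain 'valid'-mode cross-correlation of a 2-D image with a kernel."""
    kh, kw = kernel.shape
    H, W = img.shape
    out = np.zeros((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

def c4_pooled_response(img, kernel):
    """Apply the kernel at all four 90-degree rotations (the C4 group)
    and max-pool over orientations. The global maximum of the pooled
    map is then invariant to 90-degree rotations of the input image."""
    responses = [conv2d_valid(img, np.rot90(kernel, k)) for k in range(4)]
    return np.max(np.stack(responses), axis=0)
```

A CNN built only from ordinary kernels would produce a different response for each orientation of the same storm; pooling over the rotation group removes that dependence while keeping the local pattern-matching of convolution.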
Accurate estimation of tropical cyclone (TC) intensity helps to understand the evolution of TCs throughout their life cycle and plays an essential role in mitigating TC impacts. Although deep learning-based TC intensity estimation has made significant progress, existing techniques lack effective methods for overcoming the overestimation and underestimation caused by the unbalanced distribution of TC data. We therefore propose a dynamic balance convolutional neural network to address these issues. The model consists of two branches: one learns from the raw data, while the other learns from strong (weak) TCs, which account for only a small fraction of the samples. Adaptive trade-off parameters dynamically shift the model from learning the raw data toward learning strong (weak) TCs, thus reducing the underestimation (overestimation) of strong (weak) TCs. Furthermore, an attention mechanism captures the correlations between channels to further improve estimation accuracy. We trained the model on 1285 TC cases globally from 2003 to 2016 and used 94 TC cases globally from 2017 as independent test data. The resulting root-mean-square error (RMSE) of TC intensity estimation was 8.32 kt, 35% lower than that of the advanced Dvorak technique and 26% lower than that of the VGG (Visual Geometry Group) deep learning model. For a subset of 482 samples (from the East Pacific and Atlantic) analyzed against aircraft reconnaissance observations, an RMSE of 7.95 kt is achieved. Finally, we explored the model's feature learning process and the contribution of each component of the satellite image to the intensity estimate through model visualization.
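The dynamic balancing described above can be sketched as a two-term loss whose weighting ramps from the raw-data branch toward the minority (strong/weak TC) branch over training. This is a hypothetical surrogate, assuming a squared-error objective and a linear ramp schedule; the paper's actual branches and adaptive parameters may differ.

```python
import numpy as np

def dynamic_balance_loss(pred, target, minority_mask, alpha):
    """Two-branch loss: (1 - alpha) weights the raw-data branch,
    alpha weights the branch restricted to under-represented
    strong/weak TC samples (marked True in minority_mask)."""
    raw = np.mean((pred - target) ** 2)
    if minority_mask.any():
        minority = np.mean((pred[minority_mask] - target[minority_mask]) ** 2)
    else:
        minority = raw  # no minority samples in this batch; fall back to raw
    return (1.0 - alpha) * raw + alpha * minority

def alpha_schedule(epoch, total_epochs):
    """Linear ramp from 0 to 1: training shifts gradually from the raw
    data toward the minority (strong/weak) TC samples."""
    return min(1.0, epoch / max(1, total_epochs - 1))
```

Early in training (alpha near 0) the model fits the overall intensity distribution; later (alpha near 1) the gradient is dominated by the rare strong and weak storms, counteracting the regression-to-the-mean bias that causes underestimation of strong TCs and overestimation of weak ones.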