The traditional FCM algorithm is built on classical fuzzy theory, but classical fuzzy theory has its own limitations. Its weak ability to express uncertain information makes it hard for the FCM algorithm to handle cluster-boundary pixels and outliers. This paper proposes a Neutrosophic C-means clustering method with local information and a noise-distance-based kernel metric for image segmentation (NKWNLICM). First, a noise distance and fuzzy spatial information are introduced into the NCM model to improve robustness in noisy-image segmentation. Then, a kernel function is used to measure the distance between pixels; by mapping low-dimensional data into a high-dimensional feature space, classification performance is further improved. Finally, the fuzzy factor is redefined based on the distance between the central pixel and its neighborhood. The new fuzzy factor better reflects the influence of neighboring pixels on the central pixel and further improves classification accuracy. Experimental results on the Berkeley Segmentation Database demonstrate the strong performance of the proposed method for noisy-image segmentation.
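To illustrate the kernel metric idea described above, the following is a minimal sketch of a kernel-induced distance, assuming a Gaussian (RBF) kernel and a bandwidth parameter `sigma` (both choices are assumptions for illustration, not necessarily those used in NKWNLICM). For an RBF kernel, the squared distance between two mapped points in feature space reduces to a closed form, so the high-dimensional mapping never has to be computed explicitly:

```python
import numpy as np

def rbf_kernel(x, v, sigma=1.0):
    # Gaussian (RBF) kernel: K(x, v) = exp(-||x - v||^2 / sigma^2)
    return np.exp(-np.sum((x - v) ** 2) / sigma ** 2)

def kernel_distance(x, v, sigma=1.0):
    # Squared distance in the implicit feature space:
    # ||phi(x) - phi(v)||^2 = K(x,x) + K(v,v) - 2*K(x,v) = 2*(1 - K(x,v))
    # since K(x,x) = K(v,v) = 1 for an RBF kernel.
    return 2.0 * (1.0 - rbf_kernel(x, v, sigma))
```

In a kernelized C-means objective, this distance simply replaces the Euclidean term between a pixel's feature vector and a cluster prototype.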
Short-term precipitation prediction from abundant observation data (ground-station data, radar data, etc.) is an essential part of modern meteorological forecasting systems. However, most current studies use only single-modal data, which leads to problems such as poor prediction accuracy and limited lead time. This paper proposes a multimodal data-fusion precipitation prediction model that integrates station data and radar data. Specifically, the model consists of three parts. First, the radar feature encoder comprises a shallow convolutional neural network and a stacked convolutional long short-term memory network (ConvLSTM), which extracts the spatio-temporal features of radar-echo data; the weather-station feature encoder is composed of a fully connected network and an LSTM, which extracts the sequential features of the station data. Then, the cross-modal feature encoder obtains cross-modal features by aligning and exchanging the feature information of the radar data and the station data through a cross-attention mechanism. Finally, the decoder outputs the quantitative short-term precipitation prediction. The model can integrate station and radar data characteristics, improve prediction accuracy and lead time, and flexibly incorporate features from other modalities. We have verified the model on four short-term and nowcasting rainfall datasets from southeastern China, achieving the best performance among the compared algorithms.
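The cross-attention fusion step described above can be sketched as follows. This is a minimal single-head, scaled dot-product cross-attention in NumPy, assuming radar features serve as queries and station features as keys/values (the head count, feature dimensions, and query/key assignment are illustrative assumptions, not details from the paper):

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(query_feats, kv_feats):
    # query_feats: (n_q, d) features from one modality (e.g., radar encoder)
    # kv_feats:    (n_kv, d) features from the other modality (e.g., station encoder)
    d = query_feats.shape[-1]
    scores = query_feats @ kv_feats.T / np.sqrt(d)   # (n_q, n_kv) alignment scores
    weights = softmax(scores, axis=-1)               # each query attends over all kv features
    return weights @ kv_feats                        # (n_q, d) fused features
```

Running this in both directions (radar attending to station features and vice versa) yields the exchanged cross-modal representations that the decoder consumes.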
Accurate precipitation prediction can help decision makers judge trends in climate change, formulate more effective measures, and prevent flood and drought disasters. In this paper, we propose a short-term regional precipitation prediction model based on a wind-improved spatio-temporal graph convolutional network. In this model, the improved graph convolutional network integrates the effects of wind direction and geographic location at past time steps to capture spatial dependence, while the gated recurrent unit captures temporal dependence by learning the dynamic changes in the data. A spatio-temporal memory-flow module and an attention module are added to capture spatial deformation and temporal variation more accurately, thereby better matching the physical properties of precipitation. The proposed model achieves better prediction results on real-world datasets. Experiments show that our method is better at extracting the spatio-temporal information of precipitation data and capturing its temporal dependence and spatial correlation.
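The graph-convolution backbone of such a model can be sketched as a standard symmetrically normalized GCN layer, where the adjacency matrix would encode station connectivity (the wind-direction weighting described above could be folded into the adjacency entries; how exactly is an assumption here, as the paper's specific formulation is not given):

```python
import numpy as np

def gcn_layer(adj, feats, weight):
    # Symmetrically normalized graph convolution:
    #   H' = ReLU( D^{-1/2} (A + I) D^{-1/2} H W )
    # adj:    (n, n) adjacency (e.g., wind/location-weighted station graph)
    # feats:  (n, f_in) per-station feature matrix H
    # weight: (f_in, f_out) learnable projection W
    a_hat = adj + np.eye(adj.shape[0])               # add self-loops
    d_inv_sqrt = np.diag(1.0 / np.sqrt(a_hat.sum(axis=1)))
    return np.maximum(d_inv_sqrt @ a_hat @ d_inv_sqrt @ feats @ weight, 0.0)
```

Stacking such a layer inside a gated recurrent unit cell, so that spatial aggregation happens at every time step, is the usual way spatio-temporal GCN-GRU models combine the two dependencies.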