Abstract: A cloud screening unit on a satellite platform for Earth observation can play an important role in optimizing communication resources by selecting images with interesting content while skipping those that are heavily contaminated by clouds. In this study, we address the cloud screening problem by investigating an encoder–decoder convolutional neural network (CNN). CNNs usually employ millions of parameters to provide high accuracy; on the other hand, the satellite platform imposes hardware constraints on the pr…
“…Our C-UNet matches the performance of our implementation of MobUNet [41], while using less memory thanks to the lack of skip connections. It also exceeds the performance of our implementation of StridedUNet [8], while being 20% smaller in terms of number of parameters. We see similar results on CloudPeru2 as on 38-Cloud.…”
Section: Experiments On Cloud Segmentation
“…Cloud segmentation using neural networks has already been integrated in an ARTSU CubeSat mission by Z. Zhang et al. [41]. S. Ghassemi et al. [8] have proposed a small strided U-Net architecture for onboard cloud segmentation. Their architectures are still too big for our use cases and are designed to work on 4-band images (RGB + NIR).…”
Section: Experiments On Cloud Segmentation
“…Their architectures are still too big for our use cases and are designed to work on 4-band images (RGB + NIR). Nevertheless, we include 3-band implementations of the MobUNet [41], MobDeconvNet [41] and "Plain+" strided U-Net [8] architectures in our comparison.…”
Section: Experiments On Cloud Segmentation
“…Zhang et al. [41], are small versions of U-Net [31] and DeconvNet [30] for onboard cloud segmentation that use depth-wise separable convolutions [12]; • StridedUNet, introduced by Ghassemi et al. [8] for onboard cloud segmentation, which uses strided convolutions for downsampling; • LeNetFCN, an FCN variant of the well-known LeNet-5 architecture [17], originally created for handwritten digit classification. We replace the final fully-connected (dense) layers with a 1x1 2D convolution with a sigmoid activation, in order to output a segmentation map.…”
Section: Experiments On Cloud Segmentation
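The FCN conversion described in the snippet above, replacing the final dense layers with a 1x1 convolution and a sigmoid, amounts to applying the same per-pixel linear classifier over the channel dimension at every spatial location. A minimal NumPy sketch (the array shapes and the function name are illustrative, not taken from the paper):

```python
import numpy as np

def conv1x1_sigmoid(feature_map, weights, bias):
    """Apply a 1x1 convolution followed by a sigmoid.

    feature_map: (H, W, C_in) array of encoder features
    weights:     (C_in, 1) array -- one output channel (binary mask)
    bias:        scalar
    Returns an (H, W) segmentation probability map.
    """
    logits = feature_map @ weights + bias   # per-pixel linear map over channels
    return 1.0 / (1.0 + np.exp(-logits[..., 0]))

# Toy usage: a 4x4 feature map with 6 channels -> 4x4 probability map
rng = np.random.default_rng(0)
features = rng.normal(size=(4, 4, 6))
probs = conv1x1_sigmoid(features, rng.normal(size=(6, 1)), 0.0)
assert probs.shape == (4, 4) and np.all((probs > 0) & (probs < 1))
```

Because the 1x1 convolution carries no spatial extent, the same weights work for any input resolution, which is what makes the fully-convolutional variant output a map instead of a single label.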
Semantic segmentation methods have made impressive progress with deep learning. However, while achieving ever higher accuracy, state-of-the-art neural networks overlook the complexity of their architectures, which typically feature dozens of millions of trainable parameters. Consequently, these networks require high computational resources and are mostly unsuited to edge devices with tight resource constraints, such as phones, drones, or satellites. In this work, we propose two highly compact neural network architectures for semantic segmentation of images, which are up to 100 000 times less complex than state-of-the-art architectures while approaching their accuracy. To decrease the complexity of existing networks, our main ideas are to exploit lightweight encoders and decoders with depth-wise separable convolutions and to decrease memory usage by removing the skip connections between encoder and decoder. Our architectures are designed to be implemented on a basic FPGA such as the one featured on the Intel Altera Cyclone V family of SoCs. We demonstrate the potential of our solutions in the case of binary segmentation of remote sensing images, in particular for extracting clouds and trees from RGB satellite images.
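The parameter savings from the depth-wise separable convolutions mentioned in the abstract can be checked with simple arithmetic: a standard k×k convolution needs k·k·C_in·C_out weights, while the separable variant needs only k·k·C_in (depth-wise step) plus C_in·C_out (point-wise step). A small sketch of the count (the layer sizes are illustrative, not taken from the paper):

```python
def conv_params(k, c_in, c_out, bias=False):
    # Standard 2D convolution: one k x k x c_in kernel per output channel.
    return k * k * c_in * c_out + (c_out if bias else 0)

def separable_conv_params(k, c_in, c_out, bias=False):
    # Depth-wise step: one k x k kernel per input channel,
    # followed by a point-wise 1x1 convolution mixing channels.
    return k * k * c_in + c_in * c_out + (c_out if bias else 0)

std = conv_params(3, 64, 128)            # 73728
sep = separable_conv_params(3, 64, 128)  # 576 + 8192 = 8768
print(std, sep, round(std / sep, 1))     # prints: 73728 8768 8.4
```

For a typical 3x3 layer the separable form is roughly an order of magnitude smaller, which is why it recurs in MobUNet, MobDeconvNet, and the compact architectures proposed here.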
“…The most important role of clouds in climate is to regulate the Earth's radiation balance; clouds also play an important role in short-term weather forecasting and long-term climate change. Accurately distinguishing cloud pixels from clear-sky pixels to obtain high-precision cloud mask products is a basic requirement for extracting ground surface features from remote sensing data (Ghassemi and Magli, 2019); it also provides reliable data support for atmospheric and environmental applications by detecting the changes and movements of clouds in the atmosphere. Accurately identifying clear-sky and cloud pixels is thus important for expanding remote sensing applications, and cloud detection is a necessary part of quantitative remote sensing.…”
The Zhuhai-1 hyperspectral satellite can simultaneously acquire spectral information in 32 spectral bands and effectively obtain accurate information on land features through integrated hyperspectral observations of the atmosphere and land, while the presence of clouds can contaminate remote sensing images. To improve the utilization rate of hyperspectral images, this study investigates a cloud detection method for hyperspectral satellite data based on transfer learning, which can yield a model with high generalization capability from a small training sample size. For the acquired Level-1B products, the top-of-atmosphere reflectance of each band is obtained using the calibration coefficients and spectral response functions in the product packages. Meanwhile, to eliminate the data redundancy between hyperspectral bands, the data are reduced in dimensionality using a principal component transformation, and the top three principal components are extracted as the sample input data for model training. Then, pretrained VGG16 and ResNet50 weight files are used as the backbone network of the encoder, and the model is retrained on Orbita hyperspectral satellite (OHS) sample data to fine-tune the feature extraction parameters, yielding the final cloud detection model. To verify the accuracy of the method, multi-view OHS images are visually interpreted, and the cloud pixels are delineated as reference data. The experimental results show that the overall accuracy of the cloud detection model based on the ResNet50 backbone network can reach 91%, accurately distinguishing clouds from clear sky and achieving high-accuracy cloud detection in hyperspectral remote sensing images.
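The dimensionality-reduction step described above, extracting the top three principal components of a 32-band cube so it can feed an RGB-pretrained backbone, can be sketched with a principal component transformation in NumPy. The function name and the toy cube size are illustrative; a real pipeline would fit the components on the full training set rather than on a single image:

```python
import numpy as np

def top_k_principal_components(cube, k=3):
    """Project a hyperspectral cube (H, W, B) onto its top-k principal components.

    Flattens pixels to an (H*W, B) matrix, centers each band, and uses the SVD
    to find the leading components. The resulting 3-channel image can then be
    fed to an RGB-pretrained encoder (e.g. VGG16 / ResNet50) for fine-tuning.
    """
    h, w, b = cube.shape
    x = cube.reshape(-1, b).astype(float)
    x -= x.mean(axis=0)                      # center each band
    _, _, vt = np.linalg.svd(x, full_matrices=False)
    return (x @ vt[:k].T).reshape(h, w, k)   # scores of the top-k components

# Toy usage: an 8x8 image with 32 bands -> an 8x8x3 input for the encoder
rng = np.random.default_rng(0)
pcs = top_k_principal_components(rng.normal(size=(8, 8, 32)), k=3)
assert pcs.shape == (8, 8, 3)
```

Collapsing the 32 correlated bands into three components both removes redundancy and matches the 3-channel input expected by ImageNet-pretrained weights.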