2021
DOI: 10.1109/jstars.2021.3070786

Cloud and Cloud Shadow Segmentation for Remote Sensing Imagery Via Filtered Jaccard Loss Function and Parametric Augmentation

Abstract: Cloud and cloud shadow segmentation are fundamental processes in optical remote sensing image analysis. Current methods for cloud/shadow identification in geospatial imagery are not as accurate as they should be, especially in the presence of snow and haze. This paper presents a deep learning-based framework for the detection of cloud/shadow in Landsat 8 images. Our method benefits from a convolutional neural network, Cloud-Net+ (a modification of our previously proposed Cloud-Net [1]), that is trained with a novel…
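The abstract names a filtered Jaccard loss but does not reproduce its formulation here. As a rough, hedged orientation, the sketch below implements a plain soft Jaccard (IoU) loss in PyTorch, the quantity such a loss builds on; the function name soft_jaccard_loss, the eps constant, and the omission of the paper's "filtering" modification are assumptions of this sketch, not the authors' implementation.

```python
# Minimal sketch of a soft Jaccard (IoU) loss for binary cloud/shadow masks.
# NOTE: this is NOT the paper's filtered Jaccard loss; the filtering
# modification is not described in the abstract above and is omitted here.
import torch

def soft_jaccard_loss(pred: torch.Tensor, target: torch.Tensor,
                      eps: float = 1e-7) -> torch.Tensor:
    """pred: sigmoid probabilities in [0, 1]; target: binary mask, same shape."""
    pred = pred.reshape(pred.size(0), -1)       # flatten per image
    target = target.reshape(target.size(0), -1)
    intersection = (pred * target).sum(dim=1)
    union = pred.sum(dim=1) + target.sum(dim=1) - intersection
    jaccard = (intersection + eps) / (union + eps)  # soft IoU per image
    return (1.0 - jaccard).mean()                   # loss = 1 - mean IoU
```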


Cited by 36 publications (17 citation statements). References 71 publications.
“…We compared the proposed algorithm ClouDet with the state-of-the-art methods, including FCN [27], deeplabv3+ [41], Cloud-Net+ [21], BiSeNetV1 [46], and Fmask [6]. We tested ClouDet and the other methods mentioned above on the test dataset for comparison.…”
Section: Baseline Methods (mentioning)
confidence: 99%
“…Mohajerani and Saeedi [20] trained a fully convolutional network with both local and global features from the entire scene for end-to-end pixel-level labeling of satellite images. To accurately identify cloud regions in aerial or satellite images in the presence of snow and haze, an improved version was developed with a filtered Jaccard loss in [21]. Chen et al. [22] applied an adaptive simple linear iterative clustering method to obtain high-quality superpixels and detect clouds by extracting multiscale features from each superpixel.…”
Section: Introduction (mentioning)
confidence: 99%
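The statement above only summarizes the superpixel-based pipeline of Chen et al. [22]. As a hedged illustration of the superpixel step it relies on, the snippet below uses scikit-image's standard SLIC (not the adaptive variant from the citation) and an assumed, simple per-superpixel mean-reflectance feature in place of the multiscale features the cited authors extract.

```python
# Illustrative superpixel segmentation with standard SLIC (scikit-image);
# the adaptive SLIC variant and CNN features cited above are not implemented.
import numpy as np
from skimage.segmentation import slic

def superpixel_labels(image: np.ndarray, n_segments: int = 500) -> np.ndarray:
    """image: H x W x C float array (e.g., selected bands scaled to [0, 1]).
    Returns an H x W integer label map, one label per superpixel."""
    return slic(image, n_segments=n_segments, compactness=10, start_label=0)

def mean_band_features(image: np.ndarray, labels: np.ndarray) -> np.ndarray:
    """Assumed stand-in feature: mean value of each band within a superpixel."""
    n = labels.max() + 1
    feats = np.zeros((n, image.shape[2]), dtype=np.float64)
    for k in range(n):
        feats[k] = image[labels == k].mean(axis=0)
    return feats
```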
“…Fitoka et al. [114] developed a segmentation model that uses remote sensing imagery to map global wetland ecosystems for water resource management and their interactions with other earth system components. Mohajerani and Saeedi [115] used image segmentation to detect and remove clouds and cloud shadows from images to reduce error in biophysical and atmospheric analyses.…”
Section: Semantic Segmentation (mentioning)
confidence: 99%
“…According to our statistics, the daily peak value of the data obtained by the GF-5 land observation payloads is above 300 scenes. For data providers, it is crucial…”

Authors | Year | Level | Method | Architecture
Sorour Mohajerani et al. [39] | 2020 | Pixel | Cloud-Net+ [39] | U-shape + Branches
Zhiwei Li et al. [31] | 2019 | Pixel | MSCFF [31] | U-shape + Branches
Jingyu Yang et al. [40] | 2019 | Pixel | CDnet [40] | Multi-scale
Jacob Hobroe Jeppesen et al. [41] | 2019 | Pixel | RS-Net [41] | U-shape
Dengfeng Chai et al. [38] | 2019 | Pixel | Modified SegNet [38] | U-shape
Zhengfeng Shao et al. [25] | 2019 | Pixel | MF-CNN [25] | Multi-branch
Yongjie Zhan et al. [22] | 2019 | Pixel | FCN [22] | Linear stack + Branches
Johannes Dronner et al. [42] | 2018 | Pixel | CS-CNN [42] | U-shape
Han Liu et al. [43] | 2018 | Object | SLIC + HFCNN + Deep Forest [43] | Linear stack
Giorgio Morales et al. [44] | 2018 | Object | ASLIC + CNN [44] | Linear stack

Section: Introduction (mentioning)
confidence: 99%
“…As such, these architectures, with a single processing pipeline that relies on multistage cascaded CNNs, may lead to the loss of spatial information and may result in inaccurate boundary definitions [52][53][54]. Some meaningful practices relating to the fusion of features at different depths and scales to expand the receptive field of the network have also been reported [25,31,39,40]. However, further research is needed, especially on ways to reduce the loss of spatial information and how to capture and fuse the relevant and meaningful multi-scale contextual information instead of simple concatenation.…”
Section: Introduction (mentioning)
confidence: 99%