2020
DOI: 10.3390/rs12162576

Performance Analysis of Deep Convolutional Autoencoders with Different Patch Sizes for Change Detection from Burnt Areas

Abstract: Fire is one of the primary sources of damages to natural environments globally. Estimates show that approximately 4 million km2 of land burns yearly. Studies have shown that such estimates often underestimate the real extent of burnt land, which highlights the need to find better, state-of-the-art methods to detect and classify these areas. This study aimed to analyze the use of deep convolutional Autoencoders in the classification of burnt areas, considering different sample patch sizes. A simple Autoencoder …

Cited by 26 publications (14 citation statements)
References 71 publications

“…Bermudez et al [55] used a conditional Generative Adversarial Network to synthesize missing remote sensed optical data from Sentinel-1 SAR data for a region with the presence of burned area. Recently, de Bem et al [56] analyzed the performance of deep convolutional autoencoders (U-Net and ResUnet) using bitemporal image pair of the Landsat scenes and recommended the sampling window size of 256 by 256 pixels in DL model training.…”
Section: Related Studies
confidence: 99%
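The 256 by 256 sampling window recommended by de Bem et al. [56] can be illustrated with a minimal patch-extraction sketch in Python. The function name, stride, and synthetic arrays below are hypothetical; the sketch only shows how a co-registered bitemporal image pair and its burn mask could be tiled into fixed-size training samples.

```python
import numpy as np

PATCH = 256   # sampling window size recommended in [56]
STRIDE = 256  # hypothetical non-overlapping stride

def extract_patches(pre_img, post_img, mask, patch=PATCH, stride=STRIDE):
    """Tile a bitemporal image pair (H, W, C) and its burn mask (H, W)
    into fixed-size training samples."""
    h, w = mask.shape
    samples = []
    for top in range(0, h - patch + 1, stride):
        for left in range(0, w - patch + 1, stride):
            window = (slice(top, top + patch), slice(left, left + patch))
            # Stack pre- and post-fire bands along the channel axis,
            # mirroring a bitemporal input to a U-Net/ResUnet-style model.
            x = np.concatenate([pre_img[window], post_img[window]], axis=-1)
            y = mask[window]
            samples.append((x, y))
    return samples

# Usage with random data standing in for co-registered Landsat scenes:
pre = np.random.rand(1024, 1024, 6).astype("float32")
post = np.random.rand(1024, 1024, 6).astype("float32")
burn_mask = (np.random.rand(1024, 1024) > 0.9).astype("uint8")
patches = extract_patches(pre, post, burn_mask)
print(len(patches), patches[0][0].shape)  # 16 patches of shape (256, 256, 12)
```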
“…Moreover, the EMSR447, EMSR298_05, and EMSR298_03 products provide the reference data for the test sites in Corinthia, Fågelsjö-Lillåsen, and Trängslet, respectively. Differently, for the Elephant Hill and Enskogen fires, their dNBR images, calculated from cloud-free pre-fire and post-fire Sentinel-2 images, were empirically thresholded to elaborate the precise ground truth mask within the official perimeters from the Copernicus EMS (EMSR298_01) and BC Wildfire Service (K20637) [67] as de Bem et al [56] did. Furthermore, we manually refined all the burned area annotations based on visual analysis of VHR post-event optical images (i.e., Google Earth Map).…”
Section: Reference Data
confidence: 99%
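The dNBR thresholding described above rests on the standard Normalized Burn Ratio, NBR = (NIR − SWIR) / (NIR + SWIR), with dNBR = NBR_pre − NBR_post. A minimal sketch, assuming the NIR and SWIR bands are already loaded as NumPy arrays; the threshold value is purely illustrative, since the cited workflow derived its thresholds empirically per site.

```python
import numpy as np

def nbr(nir, swir, eps=1e-6):
    """Normalized Burn Ratio: (NIR - SWIR) / (NIR + SWIR)."""
    return (nir - swir) / (nir + swir + eps)  # eps avoids division by zero

def burned_mask(nir_pre, swir_pre, nir_post, swir_post, threshold=0.27):
    """dNBR = NBR_pre - NBR_post; pixels above the (illustrative, hypothetical)
    threshold are labelled as burned."""
    dnbr = nbr(nir_pre, swir_pre) - nbr(nir_post, swir_post)
    return dnbr > threshold
```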
“…DL enables pattern recognition in different data abstraction levels, varying from low-level information (corners and edges), up to high-level information (full objects) [4]. This approach achieves state-of-the-art results in different applications in remote sensing digital image processing [5]: pan-sharpening [6][7][8][9]; image registration [10][11][12][13], change detection [14][15][16][17], object detection [18][19][20][21], semantic segmentation [22][23][24][25], and time series analysis [26][27][28][29]. The classification algorithms applied in remote sensing imagery uses spatial, spectral, and temporal information to extract characteristics from the targets, where a wide variety of targets show significant results: clouds [30][31][32][33], dust-related air pollutant [34][35][36][37] land-cover/land-use [38][39][40][41], urban features [42][43][44][45], and ocean [46][47]…”
Section: Introduction
confidence: 99%
“…This is a data limitation for optical Earth observation sensors that are generally multispectral, where the available channels provide complementary information that maximizes accuracy. In semantic segmentation, approaches to aggregate more information considered: (a) the use of image fusion techniques, where the three bands used are data integration products [84]; (b) input layer adequation to support a larger amount of channels, e.g., 14 channels [15] 12 channels [14], 7 channels [85], and 4 channels [86].…”
Section: Introduction
confidence: 99%
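Option (b) above, adapting the input layer to support more channels, usually amounts to widening the first convolution of the encoder. A minimal PyTorch sketch; the class name, filter count, and 14-channel input are illustrative assumptions, not the architectures used in the cited works.

```python
import torch
import torch.nn as nn

class MultiChannelEncoderStem(nn.Module):
    """First encoder block whose input convolution accepts an arbitrary
    number of spectral channels (e.g., 14, 12, 7, or 4)."""
    def __init__(self, in_channels=14, base_filters=64):
        super().__init__()
        self.block = nn.Sequential(
            nn.Conv2d(in_channels, base_filters, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(base_filters, base_filters, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
        )

    def forward(self, x):
        return self.block(x)

# A 14-channel input patch (batch, channels, height, width):
stem = MultiChannelEncoderStem(in_channels=14)
out = stem(torch.zeros(1, 14, 256, 256))
print(out.shape)  # torch.Size([1, 64, 256, 256])
```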