2019
DOI: 10.1016/j.rse.2019.111347

Country-wide high-resolution vegetation height mapping with Sentinel-2

Abstract: Sentinel-2 multi-spectral images collected over periods of several months were used to estimate vegetation height for Gabon and Switzerland. A deep convolutional neural network (CNN) was trained to extract suitable spectral and textural features from reflectance images and to regress per-pixel vegetation height. In Gabon, reference heights for training and validation were derived from airborne LiDAR measurements. In Switzerland, reference heights were taken from an existing canopy height model derived via phot…
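The abstract describes a fully convolutional network that maps multi-spectral reflectance to a per-pixel height value. Below is a minimal sketch of that idea, assuming PyTorch; the class name, layer widths, depth and band count are purely illustrative assumptions and do not come from the paper.

# Illustrative per-pixel height regressor; NOT the architecture from the paper.
import torch
import torch.nn as nn

class PixelwiseHeightRegressor(nn.Module):
    """Fully convolutional net: multi-spectral patch -> per-pixel height map."""
    def __init__(self, in_bands: int = 13, width: int = 64, depth: int = 6):
        super().__init__()
        layers = [nn.Conv2d(in_bands, width, 3, padding=1), nn.ReLU(inplace=True)]
        for _ in range(depth - 1):
            layers += [nn.Conv2d(width, width, 3, padding=1), nn.ReLU(inplace=True)]
        layers += [nn.Conv2d(width, 1, 1)]  # one height value per pixel
        self.net = nn.Sequential(*layers)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x).squeeze(1)  # (N, H, W) heights in metres

model = PixelwiseHeightRegressor()
patch = torch.rand(2, 13, 128, 128)      # two hypothetical 13-band reflectance patches
heights = model(patch)                   # shape (2, 128, 128)
loss = nn.functional.l1_loss(heights, torch.rand(2, 128, 128))  # MAE-style regression loss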


Cited by 145 publications (131 citation statements)
References 48 publications
“…In addition to its high spatial resolution, Sentinel-2 provides a new image as often as every 5 days. A representative example in a similar context is [6], where Sentinel-2 multispectral images were used to regress per-pixel vegetation height for Gabon and Switzerland with a deep CNN. The spectral and textural features learned there yielded a mean absolute error (MAE) of 1.7 m in Switzerland and 4.3 m in Gabon, correctly estimating heights up to 50 m.…”
Section: Discussion (mentioning)
confidence: 99%
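The MAE figures quoted above (1.7 m and 4.3 m) are per-pixel mean absolute errors against reference canopy heights. A minimal sketch of how such an MAE is computed, with made-up NumPy arrays standing in for the predicted and reference heights (not data from the cited studies):

# Hypothetical values for illustration only.
import numpy as np

predicted_height = np.array([12.3, 0.0, 27.5, 41.0])   # model estimates (m)
reference_height = np.array([10.8, 0.5, 30.1, 39.2])   # reference heights (m)

mae = np.mean(np.abs(predicted_height - reference_height))
print(f"MAE = {mae:.2f} m")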
“…Confusion matrices (see Tables 5 and 6) correspond to the highest accuracies obtained for the six (6) and the four (4) height classes, respectively. High producer's (89.63% and 88.15% PA) and user's (77.73% and 76.13% UA) accuracies are observed for the class [5,40), or FPH, reflecting low omission and commission errors, respectively; as can be seen, the majority of [5,40) samples were classified correctly.…”
Section: Phase 1: Estimation of Canopy Height Using Ultra-High Resolution… (mentioning)
confidence: 98%
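The producer's and user's accuracies cited above follow directly from a confusion matrix: producer's accuracy divides each diagonal entry by its reference (row) total, user's accuracy by its predicted (column) total. A minimal sketch with an invented two-class matrix (the values are not from the cited study):

# Invented confusion matrix; rows = reference class, columns = predicted class.
import numpy as np

confusion = np.array([
    [90, 10],
    [15, 85],
])

producers_acc = np.diag(confusion) / confusion.sum(axis=1)  # 1 - omission error
users_acc = np.diag(confusion) / confusion.sum(axis=0)      # 1 - commission error
print("Producer's accuracy per class:", producers_acc)
print("User's accuracy per class:", users_acc)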
“…Additionally, there are two drop-out layers to avoid overfitting the model to the training data, given that the goal is to map HSE globally. No additional pooling layers are used, to avoid information loss during downsampling, which is also the design idea in [34] and [52]. As defined, the output prediction has a 20-meter GSD while the input data has a 10-meter GSD; thus no upsampling layers are used.…”
Section: Architecture and Training of Sen2HSE-Net (mentioning)
confidence: 99%
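The design choices mentioned above (drop-out for regularisation, no pooling layers, and a 20 m GSD output from 10 m GSD input) can be sketched as follows. This is an illustrative PyTorch module, not the Sen2HSE-Net architecture itself; the layer widths and the use of a single strided convolution for the 10 m to 20 m step are assumptions.

# Illustrative only; not the architecture described in the citing paper.
import torch
import torch.nn as nn

class HSESketchNet(nn.Module):
    def __init__(self, in_bands: int = 10, width: int = 32, p_drop: float = 0.5):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(in_bands, width, 3, padding=1), nn.ReLU(inplace=True),
            nn.Dropout2d(p_drop),                             # regularisation, no pooling
            nn.Conv2d(width, width, 3, stride=2, padding=1),  # 10 m grid -> 20 m grid
            nn.ReLU(inplace=True),
            nn.Dropout2d(p_drop),
            nn.Conv2d(width, 1, 1),                           # per-pixel HSE score
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return torch.sigmoid(self.features(x)).squeeze(1)

x = torch.rand(1, 10, 64, 64)   # 10-band patch on a 10 m grid
y = HSESketchNet()(x)           # (1, 32, 32): prediction on a 20 m grid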
“…The fundamental advantage of all these deep neural networks is their capacity for enhanced feature representation and pixel-level recognition. Examples where convolutional neural networks (CNNs) and, in particular, FCNs are used for remote sensing image classification or segmentation include [24,25,26,27,28,29,30,31,32,33,34,35]. Apart from works focusing on very high resolution satellite or aerial imagery (i.e., with a ground sampling distance of 1 m or less), data of lower spatial resolution is also being studied, since lower-resolution images such as the globally and openly available Sentinel-2 imagery remain the key candidates for large-scale mapping [36,37].…”
(mentioning)
confidence: 99%