2021
DOI: 10.3390/rs13204036
Interoperability Study of Data Preprocessing for Deep Learning and High-Resolution Aerial Photographs for Forest and Vegetation Type Identification

Abstract: When original aerial photographs are combined with deep learning to classify forest vegetation cover, these photographs are often hindered by the interlaced composition of complex backgrounds and vegetation types as well as the influence of different deep learning calculation processes, resulting in unpredictable training and test results. The purpose of this research is to evaluate (1) data preprocessing, (2) the number of classification targets, and (3) convolutional neural network (CNN) approaches combined …


Cited by 6 publications (5 citation statements)
References 44 publications
“…In addition, there is no way to validate the satellite imagery‐derived NDVI based on images from the past. Moreover, the resolution of satellite‐based remote sensing images is not high enough to distinguish plant species and to reflect the vegetation succession (Li et al, 2014; Lin & Chuang, 2021; Xie, Sha, & Yu, 2008). For example, little change in NDVI (0.65 in 1998 to 0.71 in 2016) was found at Yangjuangou (Figure 10d and Figure 7i), despite the vegetation succession (from grassland to shrubland) occurring during this period (Wang et al, 2011).…”
Section: Results
confidence: 99%
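The NDVI values quoted above (0.65 in 1998, 0.71 in 2016) follow the standard index NDVI = (NIR − Red) / (NIR + Red). A minimal sketch of that computation; the reflectance values used here are illustrative, not taken from the cited study:

```python
import numpy as np

def ndvi(nir, red, eps=1e-9):
    """Normalized Difference Vegetation Index: (NIR - Red) / (NIR + Red).

    A small epsilon guards against division by zero on dark pixels.
    """
    nir = np.asarray(nir, dtype=float)
    red = np.asarray(red, dtype=float)
    return (nir - red) / (nir + red + eps)

# Healthy vegetation reflects strongly in NIR and weakly in red,
# so a dense canopy yields an NDVI well above 0.5:
print(round(float(ndvi(0.50, 0.08)), 2))
```

NDVI ranges from −1 to 1; values near the 0.65–0.71 band mentioned above indicate dense green cover, which is why the index alone cannot separate grassland from shrubland.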
“…Likewise, DL models are used in conjunction with RGB, multispectral, and hyperspectral images to perform different tasks concerning the assessment of forest health. Lin and Chuang (2021) used the deep convolutional neural networks ResNet50, VGG19, and SegNet to extract features from aerial RGB pictures to perform tree classification. However, the initial results showed poor performance in terms of accuracy; thus the authors proposed a simplification of the images using Principal Component Analysis, selecting only the most important features of the images.…”
Section: Deep Learning
confidence: 99%
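The PCA-based image simplification described in the statement above can be sketched generically: flatten each patch to a feature vector, center the data, and project onto the leading principal components obtained by SVD. This is an illustrative reduction, not the authors' exact pipeline, and the patch sizes are made up:

```python
import numpy as np

def pca_reduce(patches, n_components):
    """Project flattened image patches onto their top principal components."""
    X = patches.reshape(len(patches), -1).astype(float)
    X -= X.mean(axis=0)                        # center each pixel/band feature
    _, _, vt = np.linalg.svd(X, full_matrices=False)
    return X @ vt[:n_components].T             # scores in the reduced space

rng = np.random.default_rng(0)
patches = rng.random((100, 8, 8, 3))           # 100 toy 8x8 RGB patches
reduced = pca_reduce(patches, n_components=16)
print(reduced.shape)                           # each patch now a 16-dim vector
```

Keeping only the leading components discards high-frequency background clutter, which is the rationale the citing authors attribute to Lin and Chuang's simplification step.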
“…The use of high-resolution cameras has allowed researchers to couple them with deep convolutional neural networks (Osco et al, 2021). Using deep learning structures alongside high-resolution aerial images has produced good results in individual tree crown segmentation (Lin and Chuang, 2021; Onishi and Ise, 2021). Other applications of deep convolutional neural networks include tree identification from aerial RGB and multispectral images; the use of temporal information has also been explored with the aid of recurrent convolutional neural networks (Feng et al, 2020).…”
Section: Introduction
confidence: 99%
“…In contrast, deep learning allows end-to-end learning without human intervention in its training process, and its deeper, more complex architecture allows it to learn more complex features autonomously [22], avoiding complex feature engineering. Among the existing deep learning algorithms, convolutional neural networks (CNNs) have been applied to the classification of land cover types [23][24][25]. Fu et al [26] used DeepLabV3+ and PSPNet algorithms to classify mangrove communities and both achieved over 86% overall accuracy.…”
Section: Introduction
confidence: 99%
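The overall accuracy figure cited above (over 86%) is conventionally computed as the trace of the confusion matrix divided by its total. A minimal sketch with a hypothetical two-class matrix; the counts are invented for illustration:

```python
import numpy as np

def overall_accuracy(confusion):
    """Overall accuracy = correctly classified samples / all samples.

    Diagonal entries of the confusion matrix count correct predictions.
    """
    confusion = np.asarray(confusion, dtype=float)
    return np.trace(confusion) / confusion.sum()

# Hypothetical 2-class confusion matrix (rows: true class, cols: predicted)
cm = np.array([[90, 10],
               [ 8, 92]])
print(round(overall_accuracy(cm), 2))
```

Overall accuracy is a coarse summary; class-imbalanced land-cover maps are often also reported with per-class producer's/user's accuracy or the kappa coefficient for that reason.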