2009
DOI: 10.1016/j.rse.2009.04.007
Object-based land cover classification of shaded areas in high spatial resolution imagery of urban areas: A comparison study

Cited by 236 publications (190 citation statements)
References 23 publications
“…In the case that both spectral and spatial characteristics of objects are quite vague in GE imagery, other ancillary data such as DEM, slope and aspect could be taken into consideration and may improve the classification accuracy [50,51]. It is expected that the inclusion of DEM data into the future list could improve the identification capacity between woodland and grassland types, as well as between buildings and shadows, which are significantly confused in this study [42,52].…”
Section: Potentials Analysis of GE Imagery for Land Use/Cover Mapping
confidence: 94%
“…The accuracy results were then compared to check for differences. This method has been widely used in existing studies [27,41,42] as it can minimize statistical and human bias in the process of validation sample selection [43]. It was implemented in three steps in this study.…”
Section: Classification Accuracy Assessment and Comparison
confidence: 99%
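The excerpt does not reproduce the three steps, but the core of such a comparison is a confusion matrix built from independently sampled validation pixels, from which overall accuracy and the kappa coefficient are derived. A minimal Python sketch under that assumption (the synthetic labels and sample size are illustrative, not values from the cited study):

import numpy as np

def confusion_matrix(reference, predicted, n_classes):
    # Count (reference class, predicted class) pairs.
    cm = np.zeros((n_classes, n_classes), dtype=int)
    np.add.at(cm, (reference, predicted), 1)
    return cm

def accuracy_metrics(cm):
    # Overall accuracy and Cohen's kappa from a confusion matrix.
    total = cm.sum()
    po = np.trace(cm) / total                                   # observed agreement
    pe = (cm.sum(axis=0) * cm.sum(axis=1)).sum() / total ** 2   # chance agreement
    return po, (po - pe) / (1 - pe)

# Synthetic reference/predicted labels stand in for the study's validation data.
rng = np.random.default_rng(0)
reference = rng.integers(0, 4, size=5000)
predicted = np.where(rng.random(5000) < 0.85, reference, rng.integers(0, 4, size=5000))

# Randomly sampled validation pixels reduce selection bias before the comparison.
sample = rng.choice(5000, size=500, replace=False)
oa, kappa = accuracy_metrics(confusion_matrix(reference[sample], predicted[sample], 4))
print(f"overall accuracy = {oa:.3f}, kappa = {kappa:.3f}")

Comparing two classifications then reduces to computing these metrics for each against the same reference sample.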
“…In an optical image, shadows are formed by obstructing direct light. The lower DN values in shadow areas cause partial or total loss of radiometric information in the affected areas (Dare, 2005; Yuan, 2008; Zhou et al., 2009); this loss of radiometric information results from the absence of direct light (Adeline et al., 2013; Wu et al., 2014). In addition, the mean DN values of all plots for the R, G, and B bands in the non-shadow area follow the order non-vegetation > water bodies > vegetation, whereas those in the shadow area follow water bodies > non-vegetation > vegetation (Table 3 and Table 4) (Figure 2). Regardless of shadow or non-shadow, the DN values for vegetation in the NIR band are significantly higher than those of water bodies and non-vegetation, indicating that the NIR band effectively reflects vegetation conditions (Table 3 and Table 4) (Figure 2).…”
Section: Spectral Characteristics of the Shadow Area
confidence: 99%
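The band-by-band comparison described above amounts to averaging DN values per land-cover class separately for shadow and non-shadow plots. A minimal sketch of that tabulation, assuming a 4-band (R, G, B, NIR) array, a per-pixel class map, and a shadow mask; all arrays here are synthetic placeholders rather than data from the cited study:

import numpy as np

bands = ["R", "G", "B", "NIR"]
classes = ["vegetation", "water", "non-vegetation"]

# Placeholder inputs: a (4, H, W) DN cube, a per-pixel class map, and a boolean shadow mask.
rng = np.random.default_rng(1)
image = rng.integers(0, 256, size=(4, 100, 100))
class_map = rng.integers(0, len(classes), size=(100, 100))
shadow_mask = rng.random((100, 100)) < 0.3

for label, region in (("non-shadow", ~shadow_mask), ("shadow", shadow_mask)):
    print(label)
    for ci, cname in enumerate(classes):
        pixels = region & (class_map == ci)
        mean_dn = image[:, pixels].mean(axis=1)   # mean DN per band for this class
        print(" ", cname, {b: round(float(m), 1) for b, m in zip(bands, mean_dn)})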
“…A number of studies have investigated the problems in the first stage, i.e., shadow detection and removal in either high resolution satellite imagery (e.g., [1,7,8]) or aerial photography (e.g., [9-14]). An invariant color space based non-linear transformation was proposed [7], while histogram thresholding was used [1,8] to discriminate shadows from non-shadow areas in QuickBird and IKONOS images.…”
Section: Introduction
confidence: 99%
“…An invariant color space based non-linear transformation was proposed [7], while histogram thresholding was used [1,8] to discriminate shadows from non-shadow areas in QuickBird and IKONOS images. Similar approaches have also been used to detect shadows in aerial photography, such as invariant color space based transformations [9] and models [10,11], and histogram thresholding [12]. Additionally, three-dimensional models have been developed when a priori knowledge of the sensor, the illumination, and the 3-D geometry of the scene is available [13,14].…”
Section: Introduction
confidence: 99%
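Of the detection approaches listed above, histogram thresholding is the simplest to illustrate: pixels darker than a threshold derived from the image histogram are flagged as shadow. The sketch below picks that threshold with Otsu's method on a single intensity band, which is a common choice but an assumption here, not the specific procedure of [1], [8], or [12]:

import numpy as np

def otsu_threshold(values, n_bins=256):
    # Choose the histogram split that maximizes between-class variance.
    hist, edges = np.histogram(values, bins=n_bins)
    centers = (edges[:-1] + edges[1:]) / 2
    weight0 = np.cumsum(hist)                   # pixels at or below each bin
    weight1 = np.cumsum(hist[::-1])[::-1]       # pixels at or above each bin
    mean0 = np.cumsum(hist * centers) / np.maximum(weight0, 1)
    mean1 = (np.cumsum((hist * centers)[::-1]) / np.maximum(weight1[::-1], 1))[::-1]
    variance = weight0[:-1] * weight1[1:] * (mean0[:-1] - mean1[1:]) ** 2
    return centers[np.argmax(variance)]

# Synthetic single-band intensity image with a dark (shadow-like) and a bright mode;
# real studies typically threshold a brightness or invariant-color-space component.
rng = np.random.default_rng(2)
intensity = np.concatenate([rng.normal(40, 10, 5000), rng.normal(160, 25, 15000)])
intensity = intensity.reshape(100, 200).clip(0, 255)

t = otsu_threshold(intensity.ravel())
shadow_mask = intensity < t          # darker-than-threshold pixels flagged as shadow
print(f"threshold = {t:.1f}, shadow fraction = {shadow_mask.mean():.2f}")

The invariant color space and 3-D model based methods mentioned in the excerpts require additional inputs (color transformations or sensor and scene geometry) and are not sketched here.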