2016
DOI: 10.1109/jstars.2016.2570234
Shadow Detection and Removal for Occluded Object Information Recovery in Urban High-Resolution Panchromatic Satellite Images

Cited by 53 publications (21 citation statements)
References 30 publications
“…The projection-contours of the building were extracted from each SGSP using the image matting method [13] and some manual correction. Since the building has a complex 3D shape, we initialized it into 27 units (ξ_model = {ξ_b_i}, i = 1, 2, …, 27) as in Figure 9e, including 25 outer units and 2 inner units.…”
Section: Methods
confidence: 99%
“…The method in [19] is appropriate for both natural and satellite images. Shadows may occlude objects in high-resolution panchromatic satellite images, particularly in urban scenes, reducing or obscuring their detail. Shadow removal is therefore an important processing step for the analysis and application of such images [20], recovering the occluded details of objects. The authors of [21] suggested a shadow-removal approach focused on subregions that match the illumination transition.…”
Section: Literature Survey
confidence: 99%
“…However, this was difficult, since there were no ground truth data. In this paper, we referred to the quantitative analysis method for shadow removal in [34,35], which selects samples and compares the gray values of restored pixels in shaded regions against those of adjacent pixels in non-shaded regions of the same land cover, since they should share similar values.…”
Section: Quantitative Analysis
confidence: 99%
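The quantitative check quoted above can be sketched in a few lines. This is a hypothetical illustration, not code from the cited papers: the function name `gray_value_gap` and the sample values are assumptions. The idea is to compare mean gray values between restored pixels inside a former shadow region and adjacent non-shadow pixels of the same land cover; a small gap suggests the restoration is consistent with its surroundings.

```python
from statistics import mean

def gray_value_gap(restored_shadow, adjacent_nonshadow):
    """Absolute difference of mean gray values between two pixel samples.

    restored_shadow:    gray values of restored pixels in a shaded region.
    adjacent_nonshadow: gray values of nearby non-shaded pixels of the
                        same land cover (e.g. the same road surface).
    """
    return abs(mean(restored_shadow) - mean(adjacent_nonshadow))

# Synthetic example: a well-restored shadow sample should sit close to
# its non-shadow neighbors for the same land cover.
restored = [118, 121, 120, 119, 122, 120]   # restored shadow-region pixels
neighbor = [121, 123, 120, 122, 121, 123]   # adjacent non-shadow pixels
print(gray_value_gap(restored, neighbor))   # small gap -> plausible restoration
```

In practice the samples would be drawn per land-cover class, since different materials have different intrinsic gray levels and pooling them would mask restoration errors.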