2018
DOI: 10.1109/lgrs.2018.2802944

Road Extraction by Deep Residual U-Net

Cited by 1,883 publications (1,078 citation statements). References 16 publications.
“…). Compared with Zhang et al. [30], we do not employ batch-normalization, as it would introduce an unwanted scaling of the scatter.…”
Section: Methods (mentioning)
Confidence: 77%
“…The second part (~(1−K)) was modeled using a residual U-Net [29,30]. The latter models the low-frequency deviations on different resolution levels, with an increasing number of channels or “features”. We used six levels, with 8, 16, 32, 64, 128, and 256 features (Fig.…”
Section: C. Network Design and Training (mentioning)
Confidence: 99%
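The excerpt above describes a residual U-Net encoder with six levels whose feature counts double from 8 to 256, and the earlier excerpt notes a variant without batch-normalization. A minimal NumPy sketch of that idea, under stated assumptions: one residual block (two 3×3 convolutions plus a 1×1 projected skip, ReLU activations, no batch-norm) chained across the six channel widths; all function names, weight shapes, and initializations are illustrative, and spatial downsampling between levels is omitted.

```python
import numpy as np

def conv3x3(x, w):
    """Zero-padded 'same' 3x3 convolution. x: (C_in, H, W), w: (C_out, C_in, 3, 3)."""
    c_out = w.shape[0]
    _, h, wd = x.shape
    xp = np.pad(x, ((0, 0), (1, 1), (1, 1)))
    out = np.zeros((c_out, h, wd))
    for i in range(h):
        for j in range(wd):
            patch = xp[:, i:i + 3, j:j + 3]            # (C_in, 3, 3) window
            out[:, i, j] = np.tensordot(w, patch, axes=3)
    return out

def residual_block(x, w1, w2, w_skip):
    """out = conv(relu(conv(relu(x)))) + 1x1-projected skip (no batch-norm)."""
    h = conv3x3(np.maximum(x, 0.0), w1)
    h = conv3x3(np.maximum(h, 0.0), w2)
    skip = np.einsum('oi,ihw->ohw', w_skip, x)         # 1x1 conv matches channels
    return h + skip

rng = np.random.default_rng(0)
features = [8, 16, 32, 64, 128, 256]                   # six levels, as in the excerpt
x = rng.standard_normal((features[0], 8, 8))
for c_in, c_out in zip(features, features[1:]):
    w1 = rng.standard_normal((c_out, c_in, 3, 3)) * 0.1
    w2 = rng.standard_normal((c_out, c_out, 3, 3)) * 0.1
    ws = rng.standard_normal((c_out, c_in)) * 0.1
    x = residual_block(x, w1, w2, ws)
    # a real encoder also downsamples spatially between levels; omitted here

print(x.shape)   # (256, 8, 8)
```

The skip connection lets each level learn a residual on top of a linear projection of its input, which is the property the cited works rely on for stable training at depth.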
“…FCN transforms a segmentation task into a pixel-level classification task and opens a new method of image semantic segmentation based on deep convolutional networks. FCN established a new state of the art in the semantic segmentation of aerial optical images [9] and has been successfully applied to satellite SAR images. Yao et al. [10] used pretrained FCNs on SAR images to classify buildings, land use, bodies of water, and other natural areas, but obtained unsatisfactory results for buildings.…”
Section: Spatial Information (mentioning)
Confidence: 99%
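The excerpt notes that an FCN recasts segmentation as per-pixel classification. A minimal sketch of the final step under that view, with illustrative names and shapes: a 1×1 convolution scores every class at every pixel of a (C, H, W) feature map, and an argmax over the class axis yields the dense label map.

```python
import numpy as np

def fcn_head(features, w):
    """Per-pixel classification head: 1x1 conv + argmax over classes.
    features: (C, H, W); w: (num_classes, C) -> labels: (H, W)."""
    scores = np.einsum('kc,chw->khw', w, features)  # class scores at each pixel
    return scores.argmax(axis=0)                    # hard label per pixel

rng = np.random.default_rng(1)
feats = rng.standard_normal((16, 4, 4))   # toy feature map from a backbone
w = rng.standard_normal((3, 16))          # 3 hypothetical classes
labels = fcn_head(feats, w)
print(labels.shape)   # (4, 4)
```

Because the head is convolutional rather than fully connected, the same weights apply at every spatial position, which is what lets an FCN accept inputs of arbitrary size.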