2015
DOI: 10.1080/01431161.2015.1054049
Road network extraction: a neural-dynamic framework based on deep learning and a finite state machine

Abstract: Extracting road networks from very-high-resolution (VHR) aerial and satellite imagery has been a long-standing problem. In this article, a neural-dynamic tracking framework is proposed to extract road networks based on deep convolutional neural networks (DNN) and a finite state machine (FSM). Inspired by autonomous mobile systems, the authors train a DNN to recognize the pattern of input data, which is an image patch extracted in a detection window centred at the current location of the tracker. The pattern is…
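The abstract only sketches the method, but the core idea (a classifier scores an image patch centred at the tracker's current position, and a finite state machine decides whether to keep tracking) can be illustrated with a minimal, hypothetical sketch. All names here (`classify_patch`, the states, the step size) are illustrative stand-ins, not the authors' actual implementation; the real system uses a trained DNN where this sketch uses a dummy classifier.

```python
import math

# Hypothetical FSM states for the tracker; the paper's actual state set
# is richer (e.g. handling branches and intersections).
TRACK, STOP = "TRACK", "STOP"

def classify_patch(image, x, y, patch=32):
    """Stand-in for the trained DNN: returns (heading, confidence) for the
    patch centred at (x, y). Here it simply reports high confidence with a
    fixed eastward heading while the centre lies inside the image."""
    h, w = len(image), len(image[0])
    inside = 0 <= x < w and 0 <= y < h
    return 0.0, (0.9 if inside else 0.1)

def track_road(image, start, step=8, min_conf=0.5, max_steps=100):
    """FSM-driven tracking loop: stay in TRACK while the classifier is
    confident, move the detection window along the predicted heading,
    and transition to STOP otherwise."""
    x, y = start
    state = TRACK
    path = [(x, y)]
    for _ in range(max_steps):
        heading, conf = classify_patch(image, x, y)
        if conf < min_conf:
            state = STOP
            break
        x += step * math.cos(heading)
        y += step * math.sin(heading)
        path.append((round(x), round(y)))
    return path, state
```

With a blank 64 × 64 "image", the dummy classifier stays confident until the window leaves the image, so the tracker walks east in 8-pixel steps and then stops — the same track/stop dynamic the framework builds on, minus the learned pattern recognition.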


Cited by 147 publications (87 citation statements); references 41 publications.
“…The evaluation results verify the efficiency of our method in terms of all three metrics and clearly show that it outperforms the other methods on this dataset. Focusing on the main roads, the result of our extraction approach is compared with the results of Poullis 2014 as well as two other methods proposed by Wang et al. [15] and Poullis et al. [27], which we henceforth cite as "Wang 2015" and "Poullis 2010". In Table 2, completeness, correctness, and quality are listed for the results of all four methods.…”
Section: Second Experiments
confidence: 99%
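The excerpt above ranks methods by completeness, correctness, and quality. These are the standard road-extraction evaluation metrics: with matched (true positive), false positive, and false negative road lengths or pixels, a minimal sketch of the three formulas is:

```python
def road_metrics(tp, fp, fn):
    """Standard road-extraction metrics:
    completeness = TP / (TP + FN)   (share of reference roads found),
    correctness  = TP / (TP + FP)   (share of extracted roads that are real),
    quality      = TP / (TP + FP + FN)  (combined measure).
    Inputs are matched/unmatched road lengths or pixel counts."""
    completeness = tp / (tp + fn)
    correctness = tp / (tp + fp)
    quality = tp / (tp + fp + fn)
    return completeness, correctness, quality
```

For example, 80 matched units against 10 false positives and 20 misses gives completeness 0.80, correctness ≈ 0.89, and quality ≈ 0.73; quality is always the strictest of the three.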
“…The selected part of the scene has 512 × 512 pixels, with a "nominal" spatial resolution of 1 m and three spectral bands. With "nominal", we emphasize that this data originates from screen capturing of a Wikimapia image [10,13,15] and the original resolution of the recorded image might have been different. The image of the first dataset and its ground truth is depicted in Figure 5a,b, respectively.…”
Section: First Experiments
confidence: 99%
“…Inspired by these approaches, we extract the urban elements of a particular area from satellite images using deep learning to capture their representative features. Similar approaches extract road networks using neural networks for dynamic environments [18], from LiDAR data [19], using line integrals [20], and using image-processing approaches [21][22][23]. In our approach, to provide scalability across countries and terrains, we have explored and modified state-of-the-art image segmentation networks.…”
Section: Related Work
confidence: 99%
“…Unlike unsupervised learning, more than one feature (other than color) can be extracted: line, shape, and texture, among others. Traditional deep learning methods include deep convolutional neural networks (DCNN) [14], [3], deep deconvolutional neural networks (DeCNN) [5], recurrent neural networks such as ReSeg [15], and fully convolutional networks [4]. However, all of these suffer from accuracy performance issues.…”
Section: Introduction
confidence: 99%