2018
DOI: 10.3390/s18082484
Using Deep Learning to Identify Utility Poles with Crossarms and Estimate Their Locations from Google Street View Images

Abstract: Traditional methods of detecting and mapping utility poles are inefficient and costly because they demand visual interpretation of high-quality data sources or intensive field inspection. The advent of deep learning for object detection provides an opportunity to detect utility poles from side-view optical images. In this study, we propose a deep learning-based method for automatically mapping roadside utility poles with crossarms (UPCs) from Google Street View (GSV) images. The method combines the…

Cited by 57 publications (36 citation statements)
References 58 publications
“…Deep neural networks are reported to be effective at extracting useful semantic information from street view images [21,30,55]. In our study, semantic features of street view images are first extracted by Places-CNN, a deep convolutional neural network used for ground-level scene recognition.…”
Section: Semantic Feature Extraction
confidence: 99%
“…Street-level imagery data, such as Google Street View (GSV), provide extensive geographical coverage and standardized, geocoded, high-resolution images of the urban environment. Computer vision algorithms have been developed to process street-level imagery, measuring perceived urban safety [21], urban change [22], wealth [23], infrastructure [24], demographics [25] and building type classification [26]. In the context of urban trees, a small but growing number of studies have sought to develop computer vision approaches to address three key areas: (1) quantify the shade provision of urban canopy [27][28][29]; (2) catalog the location of urban trees [30,31]; and (3) estimate the percent of urban street-level tree cover [15,20,32,33].…”
Section: Introduction
confidence: 99%
“…The same authors extend their approach by adding LiDAR data for object segmentation, triangulation, and monocular depth estimation for traffic lights. Zhang et al (2018) propose a CNN-based object detector for poles and apply a line-of-bearing method to estimate the geographic object position. In this paper, we advocate for a simplified version of (Wegner et al, 2016; Branson et al, 2018) for tree detection and geo-localization that is less costly to compute than a full end-to-end approach (Nassar et al, 2019b) at very large scale (i.e., 48 cities in California).…”
Section: Related Work
confidence: 99%
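The line-of-bearing idea mentioned in the excerpt above can be illustrated geometrically: each GSV image in which a pole is detected yields a camera position and a compass bearing toward the pole, and two such bearing rays intersect at the estimated pole location. The sketch below is a minimal 2-D illustration of that intersection step, not a reproduction of the method in Zhang et al. (2018); the function name and the flat planar coordinates are assumptions for illustration.

```python
import math

def intersect_bearings(p1, brg1_deg, p2, brg2_deg):
    """Estimate an object's position from two camera locations and the
    compass bearings (degrees clockwise from north) at which it was seen.
    Returns the (x, y) intersection of the two bearing rays, or None if
    the bearings are (near-)parallel and no unique intersection exists."""
    d1 = (math.sin(math.radians(brg1_deg)), math.cos(math.radians(brg1_deg)))
    d2 = (math.sin(math.radians(brg2_deg)), math.cos(math.radians(brg2_deg)))
    # 2-D cross product of the two direction vectors; ~0 means parallel rays.
    denom = d1[0] * d2[1] - d1[1] * d2[0]
    if abs(denom) < 1e-9:
        return None
    # Distance along ray 1 to the intersection point.
    t1 = ((p2[0] - p1[0]) * d2[1] - (p2[1] - p1[1]) * d2[0]) / denom
    return (p1[0] + t1 * d1[0], p1[1] + t1 * d1[1])

# Example: cameras at (0, 0) and (2, 0) see the same pole at 45° and 315°.
pole = intersect_bearings((0, 0), 45.0, (2, 0), 315.0)
print(pole)  # → approximately (1.0, 1.0)
```

In practice, bearings come from the image-column offset of a detection combined with the GSV panorama heading, and detections from many viewpoints are clustered before intersecting, which this two-ray sketch omits.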