2017 IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW)
DOI: 10.1109/cvprw.2017.199

Joint Learning from Earth Observation and OpenStreetMap Data to Get Faster Better Semantic Maps

Abstract: In this work, we investigate the use of OpenStreetMap data for semantic labeling of Earth Observation images. Deep neural networks have been used in the past for remote sensing data classification from various sensors, including multispectral, hyperspectral, SAR and LiDAR data. While OpenStreetMap has already been used as ground truth data for training such networks, this abundant data source remains rarely exploited as an input information layer. In this paper, we study different use cases and deep network ar…

Cited by 90 publications (66 citation statements) · References 37 publications
“…The work of [2] investigated late fusion of Lidar and optical data for semantic segmentation, using prediction fusion that required no feature engineering by combining two classifiers in a deep learning end-to-end approach. This was also investigated in [36] to fuse optical and OpenStreetMap data for semantic labeling. During the Data Fusion Contest (DFC) 2015, [29] proposed an early fusion scheme for Lidar and optical data based on a stack of deep features for superpixel-based classification of urban remotely sensed data.…”
Section: Related Work (mentioning)
confidence: 99%
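The early/late fusion distinction in the excerpt above is easy to make concrete. Below is a minimal PyTorch sketch (hypothetical module and tensor names, not the architecture of any cited paper): early fusion stacks the optical and auxiliary rasters into one input for a single network, while late fusion runs one network per source and combines their per-pixel class scores at the end.

```python
import torch
import torch.nn as nn

# Toy per-pixel classifier; the cited works use deeper FCN/SegNet-style encoders.
def tiny_fcn(in_channels: int, num_classes: int) -> nn.Sequential:
    return nn.Sequential(
        nn.Conv2d(in_channels, 32, kernel_size=3, padding=1),
        nn.ReLU(),
        nn.Conv2d(32, num_classes, kernel_size=1),
    )

num_classes = 6
optical = torch.randn(1, 3, 64, 64)  # RGB tile
aux     = torch.randn(1, 2, 64, 64)  # e.g. rasterized OSM layers or Lidar-derived bands

# Early fusion: concatenate all channels and train one network on the joint stack.
early_net = tiny_fcn(in_channels=3 + 2, num_classes=num_classes)
early_logits = early_net(torch.cat([optical, aux], dim=1))

# Late fusion: one network per source; fuse predictions (here a plain average
# of logits; a small learned fusion layer is another common choice).
opt_net = tiny_fcn(3, num_classes)
aux_net = tiny_fcn(2, num_classes)
late_logits = (opt_net(optical) + aux_net(aux)) / 2

print(early_logits.shape, late_logits.shape)  # both: torch.Size([1, 6, 64, 64])
```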
“…When integrating HR/VHR imagery acquired at different azimuth and elevation angles, features such as building roofs show offsets similar to those caused by topography. These offsets are particularly problematic when (a) training on repeated mappings of the same features and/or (b) using an existing vector dataset such as OpenStreetMap (OSM) as TD [133][134][135].…”
Section: Design-related Errors (mentioning)
confidence: 99%
“…3) Third Model (v3): OpenStreetMap (OSM) data is commonly used for semantic labelling in RS for various purposes, such as ground truth or automatic labelling. In [19], OSM is used for automatic object extraction and to build better semantic maps with the help of deep learning techniques; v3 applies the same concept.…”
Section: 2) Second Model (v2) (mentioning)
confidence: 99%
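Using OSM as an input layer, as in the excerpt above, presupposes rasterizing its vector geometries onto the same grid as the image tile. Below is a minimal sketch with rasterio and geopandas (hypothetical file names and tag-to-channel mapping; not code from [19] or from the paper under review):

```python
import geopandas as gpd
import rasterio
from rasterio.features import rasterize

# Hypothetical inputs: a georeferenced image tile and an OSM vector extract.
with rasterio.open("tile.tif") as src:
    transform = src.transform
    crs = src.crs
    shape = (src.height, src.width)

osm = gpd.read_file("osm_extract.geojson").to_crs(crs)

# Hypothetical tag-to-channel mapping: burn buildings and roads into
# separate binary masks aligned with the image grid.
channels = []
for tag in ("building", "highway"):
    geoms = osm.loc[osm[tag].notna(), "geometry"]
    channels.append(
        rasterize(geoms, out_shape=shape, transform=transform,
                  fill=0, default_value=1, dtype="uint8")
    )

# The masks can now be stacked with the image bands (early fusion) or
# fed to a separate network branch (late fusion), as sketched earlier.
```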