2019 IEEE Winter Conference on Applications of Computer Vision (WACV)
DOI: 10.1109/wacv.2019.00167
Semantic Stereo for Incidental Satellite Images

Cited by 135 publications (104 citation statements)
References 35 publications
“…We employ the IARPA Multi-View Stereo 3D Mapping Challenge dataset [2], which includes 47 DigitalGlobe WorldView-3 images, with 30 cm nadir resolution, collected between 2014 and 2016 over Buenos Aires; and the dataset comprised in the 2019 IEEE GRSS Data Fusion Contest [17,1] with 26 DigitalGlobe WorldView-3 images collected between 2014 and 2016 over Jacksonville. The completeness (percentage of points where the absolute difference is less than 1 meter) of the output models with respect to the lidar ground truth is used as the main evaluation metric to assess the performance of the different methods.…”
Section: Introduction (mentioning)
confidence: 99%
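The completeness metric quoted above (the percentage of points where the absolute height difference to the lidar ground truth is below 1 meter) can be sketched in a few lines. This is a minimal illustration, not the contest's official scoring code; the function name and the NaN-for-missing-data convention are assumptions.

```python
import numpy as np

def completeness(dsm, lidar, threshold=1.0):
    """Fraction of mutually valid points where |dsm - lidar| < threshold (meters).

    dsm and lidar are assumed to be co-registered height rasters of the same
    shape, with NaN marking missing values.
    """
    valid = ~np.isnan(dsm) & ~np.isnan(lidar)          # points present in both rasters
    diff = np.abs(dsm[valid] - lidar[valid])           # absolute height error
    return float(np.mean(diff < threshold))            # fraction within tolerance
```

For example, comparing a 2x2 reconstructed DSM with one missing cell against lidar ground truth counts only the three valid cells and reports the fraction of those within 1 m.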
“…The 2019 IEEE GRSS data fusion contest provides the grss_dfc_2019 dataset [41], a subset of the Urban Semantic 3D (US3D) [17] data, including multi-view, multi-band satellite images and ground truth geometric and semantic labels. Several tasks are designed to reconstruct both a 3D geometric model and a segmentation of semantic classes for urban scenes, aiming at further supporting the research in stereo and semantic 3D reconstruction using machine intelligence and deep learning.…”
Section: Satellite Dataset Experiments (mentioning)
confidence: 99%
“…Afterwards, the standard SGM and SGM-Forest are recapped in Section 3, followed by our extension of SGM-Forest based on multi-label classification. In Section 4, the methods are tested on two close-range stereo matching datasets, Middlebury and ETH3D benchmarks [12][13][14][15], an airborne dataset, EuroSDR image matching benchmark [16], and a satellite dataset from the 2019 IEEE GRSS data fusion contest [17,18]. The comparison is recorded between original SGM-Forest based on single-label classification (termed SGM-ForestS for the follow-up) and our proposed implementation based on multi-label classification (termed SGM-ForestM).…”
Section: Introduction (mentioning)
confidence: 99%
“…The contest made use of Urban Semantic 3D data, a large-scale public data set (Figure 1) [15]. More than 320 GB of data have been released for training and evaluation, covering approximately 20 km². Track 3: In the multiview semantic stereo challenge, the goal was to predict semantic labels and a DSM, given multiple images for each geographic tile.…”
Section: The 2019 DFC (mentioning)
confidence: 99%