2016
DOI: 10.1109/jstars.2016.2569162
Processing of Extremely High-Resolution LiDAR and RGB Data: Outcome of the 2015 IEEE GRSS Data Fusion Contest–Part A: 2-D Contest

Abstract: In this paper, we discuss the scientific outcomes of the 2015 data fusion contest organized by the Image Analysis and Data Fusion Technical Committee (IADF TC) of the IEEE Geoscience and Remote Sensing Society (IEEE GRSS). As for previous years, the IADF TC organized a data fusion contest aiming at fostering new ideas and solutions for multisource studies. The 2015 edition of the contest proposed a multiresolution and multisensorial challenge involving extremely high-resolution RGB images and a three-dimension…

Cited by 106 publications (84 citation statements)
References 57 publications
“…The filters learned by the first layer of the network will therefore depend on a stack of different sources. Studies considering this straightforward extension of neural networks are numerous; in [126], the authors compared networks trained on color RGB data (fine-tuned from existing architectures) with networks including a DSM channel, on the 2015 Data Fusion Contest dataset over the city of Zeebruges [127]. They use the CNN as a feature extractor and then use the features to train an SVM, predicting a single semantic class for the entire patch.…”
Section: Multimodal Data Fusion (mentioning)
confidence: 99%
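The CNN-as-feature-extractor pipeline described in this excerpt can be sketched in a few lines. The following is a minimal illustration, not the cited authors' code: it assumes a pretrained torchvision ResNet-18 backbone and random stand-in patches, and trains a scikit-learn SVM on the extracted features to predict one semantic class per patch.

```python
# Sketch of the pipeline quoted above: a CNN used as a fixed feature
# extractor on image patches, with an SVM trained on the features to
# assign one semantic class per patch. Backbone, patch size, and data
# are illustrative assumptions, not the cited authors' setup.
import numpy as np
import torch
from torchvision import models, transforms
from sklearn.svm import SVC

# Pretrained backbone with its classification head removed.
backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
backbone.fc = torch.nn.Identity()
backbone.eval()

preprocess = transforms.Compose([
    transforms.ToTensor(),
    transforms.Resize((224, 224)),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

@torch.no_grad()
def extract_features(patches):
    """Map a list of HxWx3 uint8 RGB patches to CNN feature vectors."""
    batch = torch.stack([preprocess(p) for p in patches])
    return backbone(batch).numpy()

# Hypothetical training data: RGB patches with one semantic label each.
train_patches = [np.random.randint(0, 255, (64, 64, 3), dtype=np.uint8)
                 for _ in range(20)]
train_labels = np.random.randint(0, 5, size=20)

svm = SVC(kernel="rbf")
svm.fit(extract_features(train_patches), train_labels)

test_patch = np.random.randint(0, 255, (64, 64, 3), dtype=np.uint8)
print(svm.predict(extract_features([test_patch])))  # one class per patch
```

A DSM channel, as in the comparison the excerpt describes, would enter this sketch as an extra input channel, which in turn requires adapting the first convolutional layer of the pretrained backbone.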
“…The ISPRS scientific research agenda reported in Chen et al. (2015) identified an ongoing need for benchmarking in photogrammetry (Commission II) and open geospatial science in spatial information science (Commission IV). Recent efforts toward this goal include work by Rottensteiner et al. (2014), Nex et al. (2015), Campos-Taberner et al. (2016), Koch et al. (2016), and Wang et al. (2016). Most publicly available benchmark datasets for 3D urban scene modeling have consisted of high-resolution airborne imagery and lidar suitable for 3D modeling on a relatively modest scale.…”
Section: Introduction (mentioning)
confidence: 99%
“…More recently, Bearman et al. [46] exploited point-wise annotation for semantic segmentation, which strikes a better trade-off between training annotation cost and accuracy. In the area of remote sensing, Camps-Valls and Romero et al. [47,48] proposed greedy layer-wise unsupervised pre-training that learns sparse features for remote sensing image classification. Tschannen et al. [49] introduced structured CNNs that employ Haar wavelet-based trees to identify the semantic category of every pixel of a remote sensing image.…”
Section: Related Work (mentioning)
confidence: 99%
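Greedy layer-wise unsupervised pre-training, as referenced in [47,48], trains one layer at a time as a small autoencoder on the frozen outputs of the layers below it. The sketch below is an illustrative reconstruction under assumed layer sizes and an L1 sparsity penalty; it is not the cited papers' implementation.

```python
# Sketch of greedy layer-wise unsupervised pre-training: each layer is
# trained as a sparse autoencoder on features from the already-trained
# (frozen) layers below it. Dimensions, sparsity weight, and data are
# illustrative assumptions.
import torch
import torch.nn as nn

def pretrain_layer(encoder, frozen, data, epochs=10, sparsity=1e-3):
    """Train one encoder layer as a sparse autoencoder on frozen features."""
    decoder = nn.Linear(encoder.out_features, encoder.in_features)
    opt = torch.optim.Adam(
        list(encoder.parameters()) + list(decoder.parameters()), lr=1e-3)
    with torch.no_grad():
        inputs = frozen(data)  # features from already-trained layers
    for _ in range(epochs):
        code = torch.relu(encoder(inputs))
        recon = decoder(code)
        # Reconstruction loss plus an L1 penalty encouraging sparse codes.
        loss = (nn.functional.mse_loss(recon, inputs)
                + sparsity * code.abs().mean())
        opt.zero_grad()
        loss.backward()
        opt.step()
    return encoder

# Hypothetical data: flattened image patches.
data = torch.randn(256, 192)
dims = [192, 128, 64]
layers = []
for d_in, d_out in zip(dims[:-1], dims[1:]):
    # An empty nn.Sequential acts as the identity for the first layer.
    frozen = nn.Sequential(*[nn.Sequential(l, nn.ReLU()) for l in layers])
    layers.append(pretrain_layer(nn.Linear(d_in, d_out), frozen, data))

# The stacked, pre-trained encoder can then be fine-tuned with labels.
encoder = nn.Sequential(*[nn.Sequential(l, nn.ReLU()) for l in layers])
print(encoder(data).shape)  # torch.Size([256, 64])
```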