2020
DOI: 10.3390/rs12081289

Generalized Sparse Convolutional Neural Networks for Semantic Segmentation of Point Clouds Derived from Tri-Stereo Satellite Imagery

Abstract: We studied the applicability of point clouds derived from tri-stereo satellite imagery for semantic segmentation by generalized sparse convolutional neural networks, using an Austrian study area as an example. We examined, in particular, whether the distorted geometric information, in addition to color, influences the performance of segmenting clutter, roads, buildings, trees, and vehicles. In this regard, we trained a fully convolutional neural network that uses generalized sparse convolution one time solely on 3D g…

Cited by 12 publications (15 citation statements) · References 139 publications (143 reference statements)
“…Segmentation operations with point clouds were performed directly on 3D points rather than on projected surfaces or voxels. Bachhofner et al. (2020) used a U-Net architecture-based generalized sparse convolutional neural network (GSCNN) built with sparse convolution blocks to segment 3D points generated from tri-stereo satellite imagery [85]. The segmentation of archeological and cultural heritage sites was performed with a dynamic graph convolutional neural network (DGCNN) constructed from several blocks of edge convolution layers [74].…”
Section: Image Segmentation
confidence: 99%
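The generalized sparse convolution used in the cited GSCNN is implemented by libraries such as MinkowskiEngine. The following is a minimal sketch, assuming MinkowskiEngine and a hypothetical point cloud with RGB features; it illustrates one sparse convolution block of the kind a sparse U-Net encoder would stack, not the authors' exact network.

```python
import torch
import MinkowskiEngine as ME  # library implementing generalized sparse convolution

# Hypothetical input: 1,000 points with quantized integer coordinates and RGB features.
coords = torch.randint(0, 100, (1000, 3), dtype=torch.int32)
feats = torch.rand(1000, 3)

# sparse_collate prepends the batch index that MinkowskiEngine expects.
bcoords, bfeats = ME.utils.sparse_collate([coords], [feats])
x = ME.SparseTensor(features=bfeats, coordinates=bcoords)

# One strided sparse convolution block (convolution -> batch norm -> ReLU).
conv = ME.MinkowskiConvolution(in_channels=3, out_channels=32,
                               kernel_size=3, stride=2, dimension=3)
block = torch.nn.Sequential(conv, ME.MinkowskiBatchNorm(32), ME.MinkowskiReLU())
y = block(x)  # y.F holds per-point features, y.C the surviving coordinates
```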
“…Sparse signals are a natural representation of images obtained from depth sensors. Using sparse layers and inputs in CNNs is not as popular as using dense ones, but it has also been considered (e.g., [22,23]). Some kinds of sparsity are available in commonly used deep learning libraries.…”
Section: Related Research
confidence: 99%
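As a concrete example of the sparse support mentioned in the quote above, PyTorch ships COO sparse tensors. A minimal sketch, with made-up depth values purely for illustration:

```python
import torch

# Toy sparse signal: a depth image where only three pixels carry measurements.
indices = torch.tensor([[0, 2, 5],   # row indices of the valid pixels
                        [1, 3, 4]])  # column indices of the valid pixels
values = torch.tensor([1.2, 0.7, 2.5])
sparse_depth = torch.sparse_coo_tensor(indices, values, size=(8, 8))

# Many dense layers still require materialization; sparse-aware layers avoid this step.
dense_depth = sparse_depth.to_dense()
```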
“…In addition to the hand-crafted 3D geometric features, the RGB colour information of each point was used for training the classifier. For the model learning process, we used a total of 17 features: Red, Green, Blue, EchoNumber, Number of echos, Amplitude, Normal X, Normal Y, Normal Z, Normal sigma0, linearity, planarity, omnivariance, EchoRatio, NormalizedZ, dZRange, and dZRank (Bachhofner et al., 2020; Waldhauser et al., 2014). Like all supervised classification methods, the adopted decision tree requires training data, which are used by the machine learning algorithm to build the classification model.…”
Section: Supervised Classification
confidence: 99%
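To make the 17-feature setup above concrete, the sketch below assembles a per-point feature matrix for a classifier. The file name, column spellings, and label column are assumptions for illustration, not the cited authors' actual data layout.

```python
import pandas as pd

# The 17 per-point attributes listed in the quote above (spellings are assumptions).
FEATURES = [
    "Red", "Green", "Blue", "EchoNumber", "NumberOfEchos", "Amplitude",
    "NormalX", "NormalY", "NormalZ", "NormalSigma0",
    "Linearity", "Planarity", "Omnivariance",
    "EchoRatio", "NormalizedZ", "dZRange", "dZRank",
]

points = pd.read_csv("training_points.csv")  # hypothetical export, one row per point
X = points[FEATURES].to_numpy()
y = points["class_label"].to_numpy()          # e.g. ground, building, vegetation, ...
```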
“…The classification tree seeks to partition the entire feature space of a data set, one variable at a time, by selecting a variable and an appropriate splitting value (Waldhauser et al., 2014). The decision tree is trained with the following hyperparameters: a complexity factor of 0.00001, a minimum of 20 observations in a node for splitting, at least 7 observations per leaf node, a maximum tree depth of 30, 5 competitor splits, and 5 surrogate splits (Bachhofner et al., 2020). Finally, to estimate how accurate our predictive model is, we test it on the validation dataset, which contains labelled points.…”
Section: Supervised Classification
confidence: 99%
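The hyperparameters quoted above follow an rpart-style CART parameterization. A rough scikit-learn analogue, reusing the hypothetical X and y from the previous sketch, looks like this; the complexity factor maps only loosely to ccp_alpha, and competitor/surrogate splits have no direct scikit-learn counterpart.

```python
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

# Hold out labelled points for validation (the split ratio is an assumption).
X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.3, random_state=0)

tree = DecisionTreeClassifier(
    ccp_alpha=1e-5,        # ~ complexity factor 0.00001 (loose analogue)
    min_samples_split=20,  # minimum observations in a node before splitting
    min_samples_leaf=7,    # minimum observations per leaf
    max_depth=30,
)
tree.fit(X_train, y_train)
print("validation accuracy:", accuracy_score(y_val, tree.predict(X_val)))
```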