2018 · DOI: 10.5194/isprs-annals-iv-3-135-2018

A Super Voxel-Based Riemannian Graph for Multi Scale Segmentation of Lidar Point Clouds

Abstract: Automatically segmenting LiDAR points into respective independent partitions has become a topic of great importance in photogrammetry, remote sensing, and computer vision. In this paper, we cast the problem of point cloud segmentation as a graph optimization problem by constructing a Riemannian graph. The scale space of the observed scene is explored by octree-based over-segmentation with different depths. The over-segmentation produces many super voxels which restrict the structure of the scene and …
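The paper's exact construction is behind the abstract truncation, but the general recipe for such graphs is common in the literature: treat each super voxel as a node, connect each node to its k nearest neighbours, and weight edges by normal similarity. Below is a minimal Python sketch under those assumptions; the 1 - |n_i · n_j| weight follows the usual Riemannian-graph convention and is not necessarily the authors' exact formulation.

    import numpy as np
    from scipy.spatial import cKDTree

    def riemannian_graph(centroids, normals, k=6):
        """Build a k-NN graph over super voxel centroids with
        normal-based edge weights (Riemannian-graph style).

        centroids : (N, 3) super voxel centres
        normals   : (N, 3) unit normals of each super voxel
        Returns a list of (i, j, weight) edges.
        """
        tree = cKDTree(centroids)
        # k+1 because the nearest neighbour of a point is itself
        _, idx = tree.query(centroids, k=k + 1)
        edges = []
        for i, nbrs in enumerate(idx):
            for j in nbrs[1:]:
                if i < j:  # undirected graph: store each edge once
                    # 1 - |n_i . n_j|: near-zero weight for coplanar neighbours
                    w = 1.0 - abs(float(np.dot(normals[i], normals[j])))
                    edges.append((i, j, w))
        return edges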

Cited by 7 publications (4 citation statements)
References 30 publications
“…PCD acquired from large objects, such as indoor scenes, contain numerous points, requiring excessive data-processing time if object detection is conducted on all points [51,52]. Thus, PCD with many planar objects (e.g., walls, floors, and ceilings), as is the case in indoor environments, show a low accuracy reduction rate under data decomposition, which is why voxels are mainly used in object detection [53][54][55]. However, if voxels are produced only according to whether points exist, a voxel containing many points and one created by an outlier would be regarded as the same.…”
Section: Space Decomposition
confidence: 99%
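The point this passage makes, that binary occupancy treats a dense voxel and an outlier voxel identically, suggests voxelising by point count instead. A minimal sketch; the min_points threshold and the helper name are assumptions for illustration, not drawn from the cited works.

    import numpy as np
    from collections import Counter

    def occupancy_voxels(points, voxel_size, min_points=1):
        """Voxelise a point cloud by point count rather than binary
        occupancy, so sparse outlier voxels can be filtered out.

        points     : (N, 3) array of XYZ coordinates
        voxel_size : edge length of each cubic voxel
        min_points : voxels with fewer points are treated as outliers
        """
        keys = map(tuple, np.floor(points / voxel_size).astype(int))
        counts = Counter(keys)
        # Binary occupancy would keep every key; counting lets us drop
        # voxels that exist only because of a stray outlier point.
        return {k: c for k, c in counts.items() if c >= min_points}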
“…Besides, the construction of graphical models or Markov networks also plays a vital role in the optimization or regularization, which should consider both the spatial relationships (e.g., topology) and the defined weights (e.g., similarity or proximity) between points. The graph can be built not only on a KNN structure [36], but also on one that considers the manifold structure, such as a Riemannian graph [44]. The optimization or regularization is achieved by solving the cost function formulated from the graphical models or Markov networks.…”
Section: Smoothing Of Labels
confidence: 99%
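As a concrete illustration of the cost functions this passage refers to, the sketch below evaluates a generic Potts-style energy on a k-NN graph: per-point unary costs plus a proximity-weighted penalty for neighbours that disagree on their label. This is a textbook formulation, not the cited papers' exact model, and in practice such energies are minimised with graph cuts or similar solvers rather than merely evaluated.

    import numpy as np
    from scipy.spatial import cKDTree

    def potts_energy(points, labels, unary, k=6, lam=1.0):
        """Evaluate a simple Potts-model cost on a k-NN graph.

        points : (N, 3) point coordinates
        labels : (N,) integer label per point
        unary  : (N, C) per-point cost of assigning each label
        """
        tree = cKDTree(points)
        dists, idx = tree.query(points, k=k + 1)
        e_unary = unary[np.arange(len(points)), labels].sum()
        e_pair = 0.0
        for i in range(len(points)):
            for d, j in zip(dists[i, 1:], idx[i, 1:]):
                if i < j and labels[i] != labels[j]:
                    # closer neighbours that disagree are penalised more
                    e_pair += lam * np.exp(-d)
        return e_unary + e_pair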
“…Our proposed boundary-refined supervoxel is based on the original VCCS supervoxel and consists of two major steps, namely, the detection of boundary points and the refinement of boundary points. In the first step, all the points of one supervoxel are measured by their distance to the center of the supervoxel, considering the local curvature [44] and exploring the spatial proximity of adjacent supervoxels in geodesic space.…”
Section: A Boundary Refined Supervoxelization
confidence: 99%
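The exact distance measure of [44] is not reproduced in the quote; the following is one plausible sketch of a curvature-weighted centre distance for flagging boundary candidates within a single supervoxel. The multiplicative curvature weighting and the quantile threshold are assumptions for illustration, not the cited formula.

    import numpy as np

    def boundary_candidates(points, center, curvatures, alpha=1.0, quantile=0.9):
        """Flag likely boundary points of one supervoxel: points far from
        the supervoxel centre, with the distance inflated by local
        curvature (highly curved regions tend to sit on boundaries).

        points     : (N, 3) points of the supervoxel
        center     : (3,) supervoxel centroid
        curvatures : (N,) local curvature per point
        """
        d = np.linalg.norm(points - center, axis=1)
        score = d * (1.0 + alpha * curvatures)  # hypothetical weighting
        return score > np.quantile(score, quantile)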
“…This step markedly simplifies the complexity of the subsequent point cloud segmentation and classification steps within the object-based model and also reduces the computational cost. Similar to superpixel methods in computer vision, recent developments in the 3D domain compute supervoxels from a set of voxels (Aijazi et al., 2013; Li, 2018; Papon et al., 2013) (Figure 1). In the current study, supervoxels were generated using a 3D modification of the Fractal Net Evolution Approach (FNEA), which was originally developed for coloured 2D image processing.…”
Section: Supervoxels For 3D
confidence: 99%