2013 11th International Workshop on Content-Based Multimedia Indexing (CBMI)
DOI: 10.1109/cbmi.2013.6576580
Object extraction in urban environments from large-scale dynamic point cloud datasets

Abstract: In this paper, we introduce a system framework which can automatically interpret large point cloud datasets collected from dense urban areas by moving aerial or terrestrial Lidar platforms. We propose novel algorithms for region segmentation, motion analysis, object identification and population-level scene analysis, steps that contribute substantially to organizing the data into a semantically indexed structure, enabling quick responses to content-based user queries about the environment. The system is t…

Cited by 5 publications (2 citation statements)
References 6 publications
“…In our system, point cloud segmentation is achieved by a grid based approach [9], [10]. In the literature various robust approaches are proposed for planar ground modeling such as RANSAC.…”
Section: A. Point Cloud Segmentation (mentioning)
confidence: 99%
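The grid-based segmentation referred to in the statement above partitions the cloud over a horizontal grid and separates ground from object points using per-cell height statistics. The following is a minimal sketch of that idea, not the cited implementation: the function name, cell size and height-spread threshold are illustrative assumptions, and RANSAC plane fitting is the robust alternative the statement mentions for planar ground modeling.

```python
# Minimal sketch of grid-based ground/object separation for a Lidar point
# cloud. Cell size and thresholds are illustrative assumptions, not values
# taken from the cited works.
import numpy as np

def grid_segment(points, cell_size=0.5, ground_height_spread=0.25):
    """Label each point as 'ground' or 'object' via per-cell height statistics.

    points: (N, 3) array of x, y, z coordinates.
    cell_size: edge length of a grid cell in the XY plane (metres, assumed).
    ground_height_spread: maximum z-range for a cell to count as flat ground
        (metres, assumed).
    """
    xy_min = points[:, :2].min(axis=0)
    cells = np.floor((points[:, :2] - xy_min) / cell_size).astype(int)
    # One hashable key per grid cell.
    keys = cells[:, 0] * (cells[:, 1].max() + 1) + cells[:, 1]

    labels = np.empty(len(points), dtype=object)
    for key in np.unique(keys):
        mask = keys == key
        z = points[mask, 2]
        if z.max() - z.min() < ground_height_spread:
            # Flat, low-spread cell: treat the whole cell as ground.
            labels[mask] = "ground"
        else:
            # Mixed cell: points near the cell's lowest level remain ground.
            labels[mask] = np.where(z < z.min() + ground_height_spread,
                                    "ground", "object")
    return labels
```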
“…After the classification for each voxel, a clustering process generates objects which can also be classified. Ground, short structures and tall structures may be classified from a voxel map through height thresholds, and vegetation may be detected using the multi-echo returns of the laser rangefinder [14]. However, this method is restricted to urban scenarios where no slopes on the ground are assumed.…”
Section: Introduction (mentioning)
confidence: 99%
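The height-threshold labelling over a voxel map described in the statement above can be summarised as in the sketch below. The voxel size, the 2 m short/tall split and the per-column ground test are assumptions introduced for illustration rather than values from the cited work, and multi-echo vegetation detection is omitted.

```python
# Minimal sketch of height-threshold classification over vertical voxel
# columns. All parameter values are illustrative assumptions.
import numpy as np

def classify_voxel_columns(points, voxel_size=0.5, short_max_height=2.0):
    """Map each (ix, iy) column to 'ground', 'short structure' or 'tall structure'.

    points: (N, 3) array of x, y, z Lidar coordinates.
    voxel_size: horizontal voxel edge length (metres, assumed).
    short_max_height: occupied height separating short structures
        (e.g. cars, street furniture) from tall ones (metres, assumed).
    """
    xy_min = points[:, :2].min(axis=0)
    col_idx = np.floor((points[:, :2] - xy_min) / voxel_size).astype(int)

    labels = {}
    for key in {tuple(c) for c in col_idx}:
        mask = np.all(col_idx == key, axis=1)
        z = points[mask, 2]
        height = z.max() - z.min()       # occupied height of this column
        if height < voxel_size:          # essentially a single voxel layer
            labels[key] = "ground"
        elif height < short_max_height:
            labels[key] = "short structure"
        else:
            labels[key] = "tall structure"
    return labels
```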