LiDAR technology can provide highly detailed and accurate geospatial information on an urban scene for the creation of Virtual Geographic Environments (VGEs) for different applications. However, automatic 3D modeling and feature recognition from LiDAR point clouds are very complex tasks, and they become even more so when the data are incomplete (the occlusion problem) or uncertain. In this paper, we propose to build a knowledge base comprising an ontology and semantic rules, aimed at automatic feature recognition from point clouds in support of 3D modeling. First, several ontology modules are defined to describe an urban scene from different perspectives. For instance, the spatial relations module provides a formalized representation of the topological relations that can be extracted from point clouds. Next, a knowledge base is proposed that contains the different concepts, their properties and their relations, together with constraints and semantic rules. Instances and their specific relations, which form an urban scene, are then added to the knowledge base as facts. Based on this knowledge and the semantic rules, a reasoning process extracts the semantic features of the objects and their components in the urban scene. Finally, several experiments demonstrate the validity of our approach for recognizing different semantic features of buildings from LiDAR point clouds.
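As a minimal illustration of the kind of reasoning this abstract describes, the sketch below encodes facts as (subject, predicate, object) triples and forward-chains a single hand-written semantic rule: a vertical segment that meets a Ground segment is labeled a Wall. The segment names, predicates, and the rule itself are illustrative assumptions, not the paper's actual ontology or rule set.

```python
# Toy knowledge base: facts as (subject, predicate, object) triples.
# All identifiers and predicates here are hypothetical examples.
facts = {
    ("seg1", "orientation", "horizontal"),
    ("seg1", "class", "Ground"),
    ("seg2", "orientation", "vertical"),
    ("seg2", "meets", "seg1"),
}

def infer_walls(facts):
    """Forward-chain one rule:
    vertical(X) & meets(X, Y) & Ground(Y) => Wall(X)."""
    inferred = set(facts)
    grounds = {s for (s, p, o) in facts if p == "class" and o == "Ground"}
    vertical = {s for (s, p, o) in facts if p == "orientation" and o == "vertical"}
    for (s, p, o) in facts:
        if p == "meets" and s in vertical and o in grounds:
            inferred.add((s, "class", "Wall"))
    return inferred
```

In practice such rules would be expressed in an ontology language (e.g. OWL with SWRL rules) and evaluated by a reasoner rather than hand-coded, but the fixpoint idea is the same.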
Nowadays, cultural and historical built heritage can be more effectively preserved, valorised and documented using advanced geospatial technologies. In this context, a major issue is automating the process and extracting useful information from the huge amount of spatial data acquired by advanced survey techniques (e.g., highly detailed LiDAR point clouds). In the case of historical built heritage (HBH) in particular, there have been very few effective efforts. Therefore, this paper focuses on establishing the connections between semantic and geometric information in order to generate a parametric, structured model from point clouds, using ontology as an effective approach for the formal conceptualisation of application domains. Hence, an ontological schema is proposed to structure HBH representations, starting from international standards, vocabularies, and ontologies (the City Geography Markup Language (CityGML), the International Committee for Documentation conceptual reference model (CIDOC-CRM), the Industry Foundation Classes (IFC), and the Getty Art and Architecture Thesaurus (AAT)), as well as from reasoning about the morphology of historical centres through the analysis of real case studies, to represent the built and architectural domain. The schema is validated by using it to guide the segmentation of a LiDAR point cloud of a castle, which is then used to generate parametric geometries for a historical building information model (HBIM).
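A toy sketch of what a node in such an ontological schema might look like in code: a built element carrying links into external vocabularies, with part-of structure. The class name, attribute names, and the specific vocabulary values below are illustrative assumptions, not taken from CityGML, CIDOC-CRM, IFC, or AAT themselves.

```python
from dataclasses import dataclass, field

@dataclass
class HBHElement:
    """Hypothetical schema node: one historical built heritage element
    linked to external standards and thesauri (attribute names are
    illustrative, not part of any published schema)."""
    name: str
    citygml_class: str   # e.g. a CityGML building-module surface class
    aat_term: str        # e.g. a Getty AAT architectural term
    parts: list = field(default_factory=list)

# Example instance: a castle keep containing a wall surface.
tower = HBHElement("keep", "bldg:BuildingPart", "keeps (towers)")
wall = HBHElement("curtain wall", "bldg:WallSurface", "curtain walls (fortification)")
tower.parts.append(wall)
```

Linking each geometric element to a shared vocabulary term is what lets a segmented point cloud be queried semantically later, e.g. when exporting to an HBIM environment.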
ABSTRACT: Topological relations are fundamental for the qualitative description, querying and analysis of a 3D scene. Although topological relations between 2D objects have been extensively studied and implemented in GIS applications, their direct extension to 3D is very challenging, and they cannot be directly applied to represent relations between components of complex 3D objects represented by B-Rep models in ℝ³. Here we present an extended Region Connection Calculus (RCC) model to express and formalize topological relations between planar regions for creating 3D models in Boundary Representation in ℝ³. We propose a new dimension-extended 9-Intersection model to represent the basic relations among components of a complex object, including disjoint, meet and intersect. The last element of the 3 × 3 matrix records the details of the connection through the common parts of the two regions and the intersection line of their supporting planes. Additionally, the model can handle planar regions with holes. Finally, the geometric information is transformed into a list of strings encoding the topological relations between pairs of planar regions together with the detailed connection information. The experiments show that the proposed approach helps to identify the topological relations of planar segments of a point cloud automatically.

INTRODUCTION
Spatial relations include topological, metric and directional relations, and together with semantic information they are used to describe a scene qualitatively (Mark, 1994). Topological relations between geographical objects are necessary for spatial analysis in GIS. These relations can be queried and analysed independently of the definition of the geographic coordinate system and the specific location of objects. Topological relations describe relative spatial relations with respect to reference objects.
Hence topological relations are invariant: they do not change under continuous transformations such as translation, scaling, and rotation (Egenhofer, 1990b). In general, topological relations between spatial objects are derived from the Region Connection Calculus (RCC-8) (Egenhofer, 1989; Egenhofer, 1991). Here, a region is defined as a 2-cell with a non-empty, connected interior (Egenhofer, 1990a). Additionally, the 4-Intersection Model (4IM) (Egenhofer, 1991), the 9-Intersection Model (9IM) (Clementini, 1993) and the Dimensionally Extended (DE) models (Clementini, 1993) are widely adopted and implemented for describing topological relations in spatial analysis. Topological relations between spatial objects can be described based on the relations defined for 2D regions in the RCC model. The basic relations between two regions are disjoint, meet, overlap, contain, cover, coveredBy, containedBy and equal (Egenhofer, 1990b; Randell, 1992). The definition of topological relations between spatial objects in ℝ³ is closely tied to how the 3D objects are modelled. A 3D spatial object can be modelled as a solid geometry or represented by its boundaries. Thus, topological relations between spatial objects in ℝ³ can be divid...
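The disjoint/meet/intersect distinction can be illustrated with a much-simplified 2D stand-in: classifying the relation between two axis-aligned rectangles by interval tests. This is only a sketch of the idea, under the assumption that "meet" means boundaries touch without interior overlap; it is not the paper's dimension-extended 9-Intersection model for planar regions in ℝ³.

```python
def relation(a, b):
    """Classify the basic relation between two axis-aligned rectangles
    given as (xmin, ymin, xmax, ymax): 'disjoint' (no contact),
    'meet' (boundaries touch but interiors do not overlap), or
    'intersect' (interiors overlap). A toy 2D stand-in for an
    intersection-matrix test on planar regions."""
    ax0, ay0, ax1, ay1 = a
    bx0, by0, bx1, by1 = b
    # Separated along some axis: no contact at all.
    if ax1 < bx0 or bx1 < ax0 or ay1 < by0 or by1 < ay0:
        return "disjoint"
    # Touching exactly at an edge or corner: boundaries meet only.
    if ax1 == bx0 or bx1 == ax0 or ay1 == by0 or by1 == ay0:
        return "meet"
    return "intersect"
```

A full 9IM test would instead intersect interior, boundary, and exterior pairwise and record the dimension of each intersection in a 3 × 3 matrix; the three-way split above is the coarsest reading of that matrix.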
<p><strong>Abstract.</strong> Automatic semantic segmentation of point clouds in complex 3D urban scenes is a challenging task. Semantic segmentation of urban scenes based on machine learning requires appropriate features to distinguish objects in mobile terrestrial and airborne LiDAR point clouds at the point level. In this paper, we propose a pointwise semantic segmentation method based on features derived from the Difference of Normals and on "directional height above" features, which compare the height difference between a given point and its neighbors in eight directions, in addition to features based on normal estimation. A random forest classifier is used to classify points in mobile terrestrial and airborne LiDAR point clouds. Our experiments show that the proposed features are effective for the semantic segmentation of mobile terrestrial and airborne LiDAR point clouds, especially for the vegetation, building and ground classes in airborne point clouds over urban areas.</p>
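One possible reading of the "directional height above" feature can be sketched as follows. The cone half-angle, search radius, and aggregation (maximum height difference per direction, zero if no neighbor is found) are assumptions made for illustration, not the authors' exact definition.

```python
import math

def directional_height_above(points, p, radius=1.0):
    """Hypothetical sketch of a 'directional height above' feature:
    for a query point p = (x, y, z), return for each of eight horizontal
    directions the largest height difference z_neighbor - z_p among
    neighbors within `radius` whose horizontal bearing falls inside a
    22.5-degree cone around that direction (0.0 if none)."""
    feats = []
    for k in range(8):
        ang = k * math.pi / 4.0
        dx, dy = math.cos(ang), math.sin(ang)
        best = 0.0
        for (x, y, z) in points:
            vx, vy = x - p[0], y - p[1]
            dist = math.hypot(vx, vy)
            # Keep neighbors inside the radius and inside the cone.
            if 0 < dist <= radius and (vx * dx + vy * dy) / dist > math.cos(math.pi / 8):
                best = max(best, z - p[2])
        feats.append(best)
    return feats
```

A real implementation would use a spatial index (e.g. a k-d tree) instead of the brute-force neighbor scan, but the eight-direction feature vector per point is the same.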