This manuscript provides a robust framework for the extraction of common structural components, such as columns, from terrestrial laser scanning point clouds acquired at regular rectangular concrete construction projects. The proposed framework utilizes geometric-primitive-based as well as relationship-based reasoning between objects to semantically label point clouds. The framework then compares the extracted objects to the planned building information model (BIM) to automatically identify as-built schedule and dimensional discrepancies. A novel method was also developed to remove the redundant points of a newly acquired scan so that changes between consecutive scans can be detected independently of the planned BIM. Five sets of point cloud data were acquired from the same construction site at different time intervals to assess the effectiveness of the proposed framework. Across all datasets, the framework successfully extracted 132 out of 133 columns and achieved an accuracy of 98.79% in removing redundant surfaces. The framework successfully determined the progress of concrete work at each epoch at both the activity and project levels through earned value analysis. It was also shown that the dimensions of 127 of the 132 extracted columns, and of all the slabs, complied with those in the planned BIM.
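The activity-level earned value analysis mentioned above can be sketched as follows. This is a minimal illustration of the standard earned-value quantities (SV, CV, SPI, CPI); the column counts and the equal-unit-cost assumption are hypothetical, not figures from the study.

```python
# Hypothetical sketch of earned value analysis (EVA) for progress reporting.
# All numbers below are illustrative, not taken from the study.

def earned_value_metrics(bcws, bcwp, acwp):
    """Return schedule variance (SV), cost variance (CV), and the
    schedule/cost performance indices (SPI, CPI).

    bcws: budgeted cost of work scheduled (planned value, PV)
    bcwp: budgeted cost of work performed (earned value, EV)
    acwp: actual cost of work performed (AC)
    """
    sv = bcwp - bcws   # negative -> behind schedule
    cv = bcwp - acwp   # negative -> over budget
    spi = bcwp / bcws  # < 1 -> behind schedule
    cpi = bcwp / acwp  # < 1 -> over budget
    return sv, cv, spi, cpi

# Example epoch: 120 of 133 planned columns detected as complete,
# assuming (hypothetically) an equal unit cost per column.
sv, cv, spi, cpi = earned_value_metrics(bcws=133.0, bcwp=120.0, acwp=125.0)
```

Rolling the same quantities up over all activities yields the project-level progress figures.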
Automated segmentation of planar and linear features of point clouds acquired from construction sites is essential for the automatic extraction of building construction elements such as columns, beams and slabs. However, many planar and linear segmentation methods use scene-dependent similarity thresholds that may not provide generalizable solutions for all environments. In addition, outliers exist in construction site point clouds due to data artefacts caused by moving objects, occlusions and dust. To address these concerns, a novel method for robust classification and segmentation of planar and linear features is proposed. First, coplanar and collinear points are classified through a robust principal components analysis procedure. The classified points are then grouped using a new robust clustering method, the robust complete linkage method. A robust method is also proposed to extract the points of flat-slab floors and/or ceilings independent of the aforementioned stages to improve computational efficiency. The applicability of the proposed method is evaluated on eight datasets acquired from a complex laboratory environment and two construction sites at the University of Calgary. The precision, recall, and accuracy of the segmentation at both construction sites were 96.8%, 97.7%, and 95%, respectively. These results demonstrate the suitability of the proposed method for robust segmentation of planar and linear features of contaminated datasets, such as those collected from construction sites.
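The idea behind classifying coplanar and collinear points from local PCA can be sketched with eigenvalue ratios of a point's neighborhood covariance. Plain (non-robust) PCA is used here for brevity in place of the paper's robust variant, which additionally down-weights outlying neighbors; the thresholds are hypothetical, not the paper's.

```python
import numpy as np

def pca_dimensionality(neighbors):
    """Label a point's local neighborhood as 'linear', 'planar', or
    'volumetric' from the eigenvalues of the neighborhood covariance.
    Plain PCA stands in for the paper's robust procedure; the 0.9 and
    0.01 thresholds are illustrative assumptions.
    """
    centered = neighbors - neighbors.mean(axis=0)
    cov = centered.T @ centered / len(neighbors)
    evals = np.sort(np.linalg.eigvalsh(cov))[::-1]  # l1 >= l2 >= l3
    l1, l2, l3 = evals / evals.sum()
    if l1 > 0.9:       # variance concentrated along a single axis
        return "linear"
    if l3 < 0.01:      # negligible variance normal to a best-fit plane
        return "planar"
    return "volumetric"

# Noisy points sampled on the plane z ~ 0 should be labelled 'planar'.
rng = np.random.default_rng(0)
pts = np.column_stack([rng.uniform(0, 1, 200),
                       rng.uniform(0, 1, 200),
                       rng.normal(0, 1e-4, 200)])
label = pca_dimensionality(pts)
```

In practice the neighborhood would come from a k-nearest-neighbor query around each scanned point.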
ABSTRACT: The application of terrestrial laser scanners (TLSs) on construction sites for automating construction progress monitoring and controlling structural dimension compliance is growing markedly. However, current research in construction management relies on the planned building information model (BIM) to assign the accumulated point clouds to their corresponding structural elements, which may not be reliable in cases where the dimensions of the as-built structure differ from those of the planned model and/or the planned model is not available with sufficient detail. In addition, outliers exist in construction site datasets due to data artefacts caused by moving objects, occlusions and dust. In order to overcome the aforementioned limitations, a novel method for robust classification and segmentation of planar and linear features is proposed to reduce the effects of outliers present in the LiDAR data collected from construction sites. First, coplanar and collinear points are classified through a robust principal components analysis procedure. The classified points are then grouped using a robust clustering method. A method is also proposed to robustly extract the points belonging to the flat-slab floors and/or ceilings without performing the aforementioned stages in order to preserve computational efficiency. The applicability of the proposed method is investigated in two scenarios, namely, a laboratory with 30 million points and an actual construction site with over 150 million points. The results obtained by the two experiments validate the suitability of the proposed method for robust segmentation of planar and linear features in contaminated datasets, such as those collected from construction sites.
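The grouping stage that follows classification can be illustrated with the complete-linkage criterion: two clusters merge only if the *farthest* pair of points between them is within a cut-off distance. The naive O(n³) sketch below uses a fixed, hypothetical threshold rather than the robustly estimated one described in the text, and omits the paper's robustification.

```python
import numpy as np

def complete_linkage(points, cut_distance):
    """Naive agglomerative clustering under the complete-linkage
    criterion. Illustrative stand-in for the paper's robust complete
    linkage method; cut_distance is an assumed, fixed parameter.
    """
    clusters = [[i] for i in range(len(points))]
    merged = True
    while merged:
        merged = False
        for a in range(len(clusters)):
            for b in range(a + 1, len(clusters)):
                # Distance between clusters = farthest inter-point distance.
                d_max = max(np.linalg.norm(points[i] - points[j])
                            for i in clusters[a] for j in clusters[b])
                if d_max <= cut_distance:
                    clusters[a] += clusters[b]
                    del clusters[b]
                    merged = True
                    break
            if merged:
                break
    return clusters

# Two well-separated 1-D groups should yield exactly two clusters.
pts = np.array([[0.0], [0.1], [0.2], [5.0], [5.1]])
groups = complete_linkage(pts, cut_distance=0.5)
```

Production implementations would use a hierarchical-clustering library and a spatial index rather than this quadratic pairwise scan.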
This study presented established methods, along with new algorithmic developments, to automate point cloud processing in support of the Field Information Modeling (FIM)™ framework. More specifically, given a multi-dimensional (n-D) designed information model, and the point cloud’s spatial uncertainty, the problem of automatic assignment of point clouds to their corresponding model elements was considered. The methods addressed two classes of field conditions, namely (i) negligible construction errors and (ii) the existence of construction errors. Emphasis was given to defining the assumptions, potentials, and limitations of each method in practical settings. Considering the shortcomings of current frameworks, three generic algorithms were designed to address the point-cloud-to-model assignment. The algorithms include new developments for (i) point cloud vs. model comparison (negligible construction errors), (ii) robust point neighborhood definition, and (iii) Monte-Carlo-based point-cloud-to-model surface hypothesis testing (existence of construction errors). The effectiveness of the new methods was demonstrated in real-world point clouds, acquired from construction projects, with promising results. For the overall problem of point-cloud-to-model assignment, the proposed point cloud vs. model and point-cloud-to-model hypothesis testing methods achieved F-measures of 99.3% and 98.4%, respectively, on real-world datasets.
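The Monte-Carlo-based point-cloud-to-model surface hypothesis test can be sketched as follows: given a point's measurement covariance, sample its error distribution and accept the "point lies on this model surface" hypothesis if zero signed distance falls inside the central confidence interval of the samples. This is a simplified stand-in under assumed Gaussian noise, not the paper's exact procedure; all numbers are illustrative.

```python
import numpy as np

def mc_on_surface_test(point, cov, plane_n, plane_d,
                       n_samples=2000, alpha=0.05, seed=0):
    """Monte-Carlo test of the hypothesis that a measured point lies on
    the plane n.x = d, given the point's 3x3 measurement covariance.
    Simplified illustration of point-to-model hypothesis testing.
    """
    rng = np.random.default_rng(seed)
    samples = rng.multivariate_normal(point, cov, n_samples)
    dist = samples @ plane_n - plane_d        # signed point-plane distances
    lo, hi = np.quantile(dist, [alpha / 2, 1 - alpha / 2])
    return bool(lo <= 0.0 <= hi)              # accept if 0 is plausible

# Plane z = 0 with 1 mm isotropic point noise: a point 1 mm off the
# plane is accepted, one 50 mm off is rejected (illustrative numbers).
cov = np.eye(3) * (0.001 ** 2)
n = np.array([0.0, 0.0, 1.0])
on = mc_on_surface_test(np.array([0.2, 0.3, 0.001]), cov, n, 0.0)
off = mc_on_surface_test(np.array([0.2, 0.3, 0.05]), cov, n, 0.0)
```

Points accepted by such a test would then be assigned to the corresponding model element even in the presence of construction errors.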
This paper outlines a new framework for the calibration of optical instruments, in particular smartphone cameras, using highly redundant circular black‐and‐white target fields. New methods were introduced for (i) matching targets between images; (ii) adjusting the systematic eccentricity error of target centres; and (iii) iteratively improving the calibration solution through a free‐network self‐calibrating bundle adjustment. The proposed method effectively matched circular targets in 270 smartphone images, taken within a calibration laboratory, with robustness to type II errors (false negatives). The proposed eccentricity adjustment, which requires only camera projective matrices from two views, behaved comparably to available closed‐form solutions, which require additional a priori object‐space target information. Finally, specifically for the case of mobile devices, the calibration parameters obtained using the framework were found to be superior compared to in situ calibration for estimating the 3D reconstructed radius of a mechanical pipe (approximately 45% improvement on average).
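The interior-orientation parameters that a self-calibrating bundle adjustment estimates can be illustrated with a minimal pinhole projection carrying a single radial distortion coefficient. This is a deliberately reduced model for illustration only; the paper's calibration estimates a richer parameter set, and the reprojection RMSE below is simply the quantity such an adjustment minimizes over all images and targets.

```python
import numpy as np

def project(point_cam, f, cx, cy, k1):
    """Project a camera-frame 3-D point with a pinhole model plus one
    radial distortion term k1. Reduced interior-orientation model,
    assumed for illustration.
    """
    x, y = point_cam[0] / point_cam[2], point_cam[1] / point_cam[2]
    r2 = x * x + y * y
    d = 1.0 + k1 * r2                  # radial distortion factor
    return np.array([cx + f * x * d, cy + f * y * d])

def reprojection_rmse(points_cam, observed_uv, f, cx, cy, k1):
    """RMS reprojection error over a set of target observations."""
    proj = np.array([project(p, f, cx, cy, k1) for p in points_cam])
    return float(np.sqrt(np.mean(np.sum((proj - observed_uv) ** 2, axis=1))))

# Synthetic check: observations generated by the model itself
# reproject with zero error (hypothetical parameter values).
pts_cam = np.array([[0.1, 0.2, 1.0], [-0.3, 0.1, 2.0]])
uv_obs = np.array([project(p, 1000.0, 500.0, 400.0, -0.1) for p in pts_cam])
rmse = reprojection_rmse(pts_cam, uv_obs, 1000.0, 500.0, 400.0, -0.1)
```

A bundle adjustment iterates over exactly this residual, jointly refining the camera parameters, poses, and target coordinates.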